Test Report: KVM_Linux_crio 19679

                    
7cae0481c1ae024841826a3639f158d099448b48:2024-09-20:36298

Test fail (33/311)

Order  Failed test  Duration (s)
33 TestAddons/parallel/Registry 74.54
34 TestAddons/parallel/Ingress 152.56
36 TestAddons/parallel/MetricsServer 350.62
118 TestFunctional/parallel/ImageCommands/ImageListShort 2.31
122 TestFunctional/parallel/ImageCommands/ImageBuild 6.97
163 TestMultiControlPlane/serial/StopSecondaryNode 141.87
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.73
165 TestMultiControlPlane/serial/RestartSecondaryNode 6.36
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 370.98
170 TestMultiControlPlane/serial/StopCluster 141.74
230 TestMultiNode/serial/RestartKeepsNodes 326.38
232 TestMultiNode/serial/StopMultiNode 144.73
239 TestPreload 172.53
247 TestKubernetesUpgrade 404.36
285 TestPause/serial/SecondStartNoReconfiguration 92.49
315 TestStartStop/group/old-k8s-version/serial/FirstStart 278.22
338 TestStartStop/group/no-preload/serial/Stop 139.12
341 TestStartStop/group/embed-certs/serial/Stop 139.1
344 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.01
345 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
347 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
348 TestStartStop/group/old-k8s-version/serial/DeployApp 0.51
349 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 86.64
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
355 TestStartStop/group/old-k8s-version/serial/SecondStart 724.59
356 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.38
357 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.44
358 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.42
359 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.65
360 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 462.44
361 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 438.13
362 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 330.29
363 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 165.28
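
Any failure above can be re-run on its own against the same kvm2/crio combination. A minimal sketch, assuming a minikube source checkout with the integration suite under test/integration and a built out/minikube-linux-amd64; the CI harness passes extra flags (driver, container runtime, ISO) that are omitted here, so the exact invocation may differ:

	# re-run a single failing subtest; -run matches the subtest by its slash-separated path
	go test ./test/integration -run "TestAddons/parallel/Registry" -timeout 60m -v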
TestAddons/parallel/Registry (74.54s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 3.408902ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-7g6lm" [4ad8ab0b-f43b-475a-984c-11d2a23963c0] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002888602s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-k96rm" [0612b678-15da-44d6-acfb-c29dd8dd2b7d] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003321442s
addons_test.go:338: (dbg) Run:  kubectl --context addons-679190 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-679190 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-679190 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.081682042s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-679190 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
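
The failed check can be reproduced by hand with the same command the test issues at addons_test.go:343; a minimal sketch, with the profile name and in-cluster service DNS name taken from the log above (a healthy registry answers with the HTTP/1.1 200 headers the assertion expects, whereas here the run timed out):

	kubectl --context addons-679190 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"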
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-679190 ip
2024/09/20 17:47:44 [DEBUG] GET http://192.168.39.158:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-679190 addons disable registry --alsologtostderr -v=1
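
The test also probes the registry from the host via the node IP (the DEBUG GET above); a minimal sketch of the same check, assuming the addon exposes the standard Docker registry v2 API on port 5000 of that address:

	out/minikube-linux-amd64 -p addons-679190 ip
	curl -sS http://192.168.39.158:5000/v2/_catalog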
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-679190 -n addons-679190
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-679190 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-679190 logs -n 25: (1.73118349s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-591101 | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC |                     |
	|         | -p download-only-591101                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC | 20 Sep 24 17:36 UTC |
	| delete  | -p download-only-591101                                                                     | download-only-591101 | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC | 20 Sep 24 17:36 UTC |
	| start   | -o=json --download-only                                                                     | download-only-799771 | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC |                     |
	|         | -p download-only-799771                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC | 20 Sep 24 17:36 UTC |
	| delete  | -p download-only-799771                                                                     | download-only-799771 | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC | 20 Sep 24 17:36 UTC |
	| delete  | -p download-only-591101                                                                     | download-only-591101 | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC | 20 Sep 24 17:36 UTC |
	| delete  | -p download-only-799771                                                                     | download-only-799771 | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC | 20 Sep 24 17:36 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-242308 | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC |                     |
	|         | binary-mirror-242308                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46511                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-242308                                                                     | binary-mirror-242308 | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC | 20 Sep 24 17:36 UTC |
	| addons  | disable dashboard -p                                                                        | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC |                     |
	|         | addons-679190                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC |                     |
	|         | addons-679190                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-679190 --wait=true                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC | 20 Sep 24 17:38 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:46 UTC | 20 Sep 24 17:46 UTC |
	|         | -p addons-679190                                                                            |                      |         |         |                     |                     |
	| addons  | addons-679190 addons disable                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:46 UTC | 20 Sep 24 17:46 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:46 UTC | 20 Sep 24 17:46 UTC |
	|         | -p addons-679190                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-679190 ssh cat                                                                       | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:46 UTC | 20 Sep 24 17:46 UTC |
	|         | /opt/local-path-provisioner/pvc-4a7cfa23-ab8c-4f3b-b69f-a32cbb6790dc_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:46 UTC | 20 Sep 24 17:46 UTC |
	|         | addons-679190                                                                               |                      |         |         |                     |                     |
	| addons  | addons-679190 addons disable                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:46 UTC | 20 Sep 24 17:46 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-679190 addons disable                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:46 UTC | 20 Sep 24 17:47 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:47 UTC | 20 Sep 24 17:47 UTC |
	|         | addons-679190                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-679190 ssh curl -s                                                                   | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:47 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-679190 addons                                                                        | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:47 UTC |                     |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-679190 ip                                                                            | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:47 UTC | 20 Sep 24 17:47 UTC |
	| addons  | addons-679190 addons disable                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:47 UTC | 20 Sep 24 17:47 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:36:14
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:36:14.402655  245557 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:36:14.402933  245557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:36:14.402943  245557 out.go:358] Setting ErrFile to fd 2...
	I0920 17:36:14.402948  245557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:36:14.403159  245557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 17:36:14.403805  245557 out.go:352] Setting JSON to false
	I0920 17:36:14.404822  245557 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4717,"bootTime":1726849057,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:36:14.404931  245557 start.go:139] virtualization: kvm guest
	I0920 17:36:14.407275  245557 out.go:177] * [addons-679190] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:36:14.408502  245557 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 17:36:14.408541  245557 notify.go:220] Checking for updates...
	I0920 17:36:14.411057  245557 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:36:14.412803  245557 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:36:14.414198  245557 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:36:14.415792  245557 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:36:14.417282  245557 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:36:14.418952  245557 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:36:14.453245  245557 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 17:36:14.454782  245557 start.go:297] selected driver: kvm2
	I0920 17:36:14.454802  245557 start.go:901] validating driver "kvm2" against <nil>
	I0920 17:36:14.454819  245557 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:36:14.455638  245557 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:36:14.455744  245557 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 17:36:14.473296  245557 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 17:36:14.473373  245557 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:36:14.473597  245557 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:36:14.473630  245557 cni.go:84] Creating CNI manager for ""
	I0920 17:36:14.473686  245557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 17:36:14.473698  245557 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 17:36:14.473755  245557 start.go:340] cluster config:
	{Name:addons-679190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-679190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:36:14.473865  245557 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:36:14.475815  245557 out.go:177] * Starting "addons-679190" primary control-plane node in "addons-679190" cluster
	I0920 17:36:14.477065  245557 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:36:14.477119  245557 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 17:36:14.477134  245557 cache.go:56] Caching tarball of preloaded images
	I0920 17:36:14.477218  245557 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:36:14.477230  245557 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:36:14.477537  245557 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/config.json ...
	I0920 17:36:14.477565  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/config.json: {Name:mk111f108190ba76ef8034134b6af7b7147db588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:14.477758  245557 start.go:360] acquireMachinesLock for addons-679190: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:36:14.477823  245557 start.go:364] duration metric: took 47.775µs to acquireMachinesLock for "addons-679190"
	I0920 17:36:14.477861  245557 start.go:93] Provisioning new machine with config: &{Name:addons-679190 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-679190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:36:14.477966  245557 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 17:36:14.479569  245557 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0920 17:36:14.479725  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:14.479766  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:14.495292  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36423
	I0920 17:36:14.495863  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:14.496485  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:14.496509  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:14.496865  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:14.497041  245557 main.go:141] libmachine: (addons-679190) Calling .GetMachineName
	I0920 17:36:14.497187  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:14.497338  245557 start.go:159] libmachine.API.Create for "addons-679190" (driver="kvm2")
	I0920 17:36:14.497372  245557 client.go:168] LocalClient.Create starting
	I0920 17:36:14.497411  245557 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem
	I0920 17:36:14.582390  245557 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem
	I0920 17:36:14.704786  245557 main.go:141] libmachine: Running pre-create checks...
	I0920 17:36:14.704815  245557 main.go:141] libmachine: (addons-679190) Calling .PreCreateCheck
	I0920 17:36:14.705320  245557 main.go:141] libmachine: (addons-679190) Calling .GetConfigRaw
	I0920 17:36:14.705938  245557 main.go:141] libmachine: Creating machine...
	I0920 17:36:14.705960  245557 main.go:141] libmachine: (addons-679190) Calling .Create
	I0920 17:36:14.706168  245557 main.go:141] libmachine: (addons-679190) Creating KVM machine...
	I0920 17:36:14.707572  245557 main.go:141] libmachine: (addons-679190) DBG | found existing default KVM network
	I0920 17:36:14.708407  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:14.708217  245579 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00020b330}
	I0920 17:36:14.708450  245557 main.go:141] libmachine: (addons-679190) DBG | created network xml: 
	I0920 17:36:14.708468  245557 main.go:141] libmachine: (addons-679190) DBG | <network>
	I0920 17:36:14.708486  245557 main.go:141] libmachine: (addons-679190) DBG |   <name>mk-addons-679190</name>
	I0920 17:36:14.708539  245557 main.go:141] libmachine: (addons-679190) DBG |   <dns enable='no'/>
	I0920 17:36:14.708569  245557 main.go:141] libmachine: (addons-679190) DBG |   
	I0920 17:36:14.708581  245557 main.go:141] libmachine: (addons-679190) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 17:36:14.708596  245557 main.go:141] libmachine: (addons-679190) DBG |     <dhcp>
	I0920 17:36:14.708609  245557 main.go:141] libmachine: (addons-679190) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 17:36:14.708620  245557 main.go:141] libmachine: (addons-679190) DBG |     </dhcp>
	I0920 17:36:14.708631  245557 main.go:141] libmachine: (addons-679190) DBG |   </ip>
	I0920 17:36:14.708640  245557 main.go:141] libmachine: (addons-679190) DBG |   
	I0920 17:36:14.708651  245557 main.go:141] libmachine: (addons-679190) DBG | </network>
	I0920 17:36:14.708660  245557 main.go:141] libmachine: (addons-679190) DBG | 
	I0920 17:36:14.714317  245557 main.go:141] libmachine: (addons-679190) DBG | trying to create private KVM network mk-addons-679190 192.168.39.0/24...
	I0920 17:36:14.786920  245557 main.go:141] libmachine: (addons-679190) DBG | private KVM network mk-addons-679190 192.168.39.0/24 created
	I0920 17:36:14.786967  245557 main.go:141] libmachine: (addons-679190) Setting up store path in /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190 ...
	I0920 17:36:14.786983  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:14.786868  245579 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:36:14.787006  245557 main.go:141] libmachine: (addons-679190) Building disk image from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 17:36:14.787026  245557 main.go:141] libmachine: (addons-679190) Downloading /home/jenkins/minikube-integration/19679-237658/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 17:36:15.067231  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:15.067014  245579 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa...
	I0920 17:36:15.314104  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:15.313891  245579 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/addons-679190.rawdisk...
	I0920 17:36:15.314159  245557 main.go:141] libmachine: (addons-679190) DBG | Writing magic tar header
	I0920 17:36:15.314176  245557 main.go:141] libmachine: (addons-679190) DBG | Writing SSH key tar header
	I0920 17:36:15.314187  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:15.314075  245579 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190 ...
	I0920 17:36:15.314203  245557 main.go:141] libmachine: (addons-679190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190
	I0920 17:36:15.314278  245557 main.go:141] libmachine: (addons-679190) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190 (perms=drwx------)
	I0920 17:36:15.314312  245557 main.go:141] libmachine: (addons-679190) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines (perms=drwxr-xr-x)
	I0920 17:36:15.314323  245557 main.go:141] libmachine: (addons-679190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines
	I0920 17:36:15.314336  245557 main.go:141] libmachine: (addons-679190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:36:15.314343  245557 main.go:141] libmachine: (addons-679190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658
	I0920 17:36:15.314349  245557 main.go:141] libmachine: (addons-679190) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube (perms=drwxr-xr-x)
	I0920 17:36:15.314357  245557 main.go:141] libmachine: (addons-679190) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658 (perms=drwxrwxr-x)
	I0920 17:36:15.314367  245557 main.go:141] libmachine: (addons-679190) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 17:36:15.314379  245557 main.go:141] libmachine: (addons-679190) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 17:36:15.314391  245557 main.go:141] libmachine: (addons-679190) Creating domain...
	I0920 17:36:15.314402  245557 main.go:141] libmachine: (addons-679190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 17:36:15.314413  245557 main.go:141] libmachine: (addons-679190) DBG | Checking permissions on dir: /home/jenkins
	I0920 17:36:15.314423  245557 main.go:141] libmachine: (addons-679190) DBG | Checking permissions on dir: /home
	I0920 17:36:15.314435  245557 main.go:141] libmachine: (addons-679190) DBG | Skipping /home - not owner
	I0920 17:36:15.315774  245557 main.go:141] libmachine: (addons-679190) define libvirt domain using xml: 
	I0920 17:36:15.315815  245557 main.go:141] libmachine: (addons-679190) <domain type='kvm'>
	I0920 17:36:15.315826  245557 main.go:141] libmachine: (addons-679190)   <name>addons-679190</name>
	I0920 17:36:15.315834  245557 main.go:141] libmachine: (addons-679190)   <memory unit='MiB'>4000</memory>
	I0920 17:36:15.315864  245557 main.go:141] libmachine: (addons-679190)   <vcpu>2</vcpu>
	I0920 17:36:15.315882  245557 main.go:141] libmachine: (addons-679190)   <features>
	I0920 17:36:15.315888  245557 main.go:141] libmachine: (addons-679190)     <acpi/>
	I0920 17:36:15.315893  245557 main.go:141] libmachine: (addons-679190)     <apic/>
	I0920 17:36:15.315898  245557 main.go:141] libmachine: (addons-679190)     <pae/>
	I0920 17:36:15.315903  245557 main.go:141] libmachine: (addons-679190)     
	I0920 17:36:15.315908  245557 main.go:141] libmachine: (addons-679190)   </features>
	I0920 17:36:15.315915  245557 main.go:141] libmachine: (addons-679190)   <cpu mode='host-passthrough'>
	I0920 17:36:15.315933  245557 main.go:141] libmachine: (addons-679190)   
	I0920 17:36:15.315947  245557 main.go:141] libmachine: (addons-679190)   </cpu>
	I0920 17:36:15.315956  245557 main.go:141] libmachine: (addons-679190)   <os>
	I0920 17:36:15.315968  245557 main.go:141] libmachine: (addons-679190)     <type>hvm</type>
	I0920 17:36:15.315977  245557 main.go:141] libmachine: (addons-679190)     <boot dev='cdrom'/>
	I0920 17:36:15.315988  245557 main.go:141] libmachine: (addons-679190)     <boot dev='hd'/>
	I0920 17:36:15.315997  245557 main.go:141] libmachine: (addons-679190)     <bootmenu enable='no'/>
	I0920 17:36:15.316006  245557 main.go:141] libmachine: (addons-679190)   </os>
	I0920 17:36:15.316011  245557 main.go:141] libmachine: (addons-679190)   <devices>
	I0920 17:36:15.316017  245557 main.go:141] libmachine: (addons-679190)     <disk type='file' device='cdrom'>
	I0920 17:36:15.316028  245557 main.go:141] libmachine: (addons-679190)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/boot2docker.iso'/>
	I0920 17:36:15.316043  245557 main.go:141] libmachine: (addons-679190)       <target dev='hdc' bus='scsi'/>
	I0920 17:36:15.316055  245557 main.go:141] libmachine: (addons-679190)       <readonly/>
	I0920 17:36:15.316064  245557 main.go:141] libmachine: (addons-679190)     </disk>
	I0920 17:36:15.316073  245557 main.go:141] libmachine: (addons-679190)     <disk type='file' device='disk'>
	I0920 17:36:15.316112  245557 main.go:141] libmachine: (addons-679190)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 17:36:15.316140  245557 main.go:141] libmachine: (addons-679190)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/addons-679190.rawdisk'/>
	I0920 17:36:15.316151  245557 main.go:141] libmachine: (addons-679190)       <target dev='hda' bus='virtio'/>
	I0920 17:36:15.316156  245557 main.go:141] libmachine: (addons-679190)     </disk>
	I0920 17:36:15.316164  245557 main.go:141] libmachine: (addons-679190)     <interface type='network'>
	I0920 17:36:15.316171  245557 main.go:141] libmachine: (addons-679190)       <source network='mk-addons-679190'/>
	I0920 17:36:15.316176  245557 main.go:141] libmachine: (addons-679190)       <model type='virtio'/>
	I0920 17:36:15.316183  245557 main.go:141] libmachine: (addons-679190)     </interface>
	I0920 17:36:15.316188  245557 main.go:141] libmachine: (addons-679190)     <interface type='network'>
	I0920 17:36:15.316195  245557 main.go:141] libmachine: (addons-679190)       <source network='default'/>
	I0920 17:36:15.316200  245557 main.go:141] libmachine: (addons-679190)       <model type='virtio'/>
	I0920 17:36:15.316206  245557 main.go:141] libmachine: (addons-679190)     </interface>
	I0920 17:36:15.316219  245557 main.go:141] libmachine: (addons-679190)     <serial type='pty'>
	I0920 17:36:15.316228  245557 main.go:141] libmachine: (addons-679190)       <target port='0'/>
	I0920 17:36:15.316240  245557 main.go:141] libmachine: (addons-679190)     </serial>
	I0920 17:36:15.316250  245557 main.go:141] libmachine: (addons-679190)     <console type='pty'>
	I0920 17:36:15.316263  245557 main.go:141] libmachine: (addons-679190)       <target type='serial' port='0'/>
	I0920 17:36:15.316272  245557 main.go:141] libmachine: (addons-679190)     </console>
	I0920 17:36:15.316283  245557 main.go:141] libmachine: (addons-679190)     <rng model='virtio'>
	I0920 17:36:15.316295  245557 main.go:141] libmachine: (addons-679190)       <backend model='random'>/dev/random</backend>
	I0920 17:36:15.316305  245557 main.go:141] libmachine: (addons-679190)     </rng>
	I0920 17:36:15.316313  245557 main.go:141] libmachine: (addons-679190)     
	I0920 17:36:15.316348  245557 main.go:141] libmachine: (addons-679190)     
	I0920 17:36:15.316373  245557 main.go:141] libmachine: (addons-679190)   </devices>
	I0920 17:36:15.316383  245557 main.go:141] libmachine: (addons-679190) </domain>
	I0920 17:36:15.316393  245557 main.go:141] libmachine: (addons-679190) 
	I0920 17:36:15.320892  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:3c:0d:15 in network default
	I0920 17:36:15.321583  245557 main.go:141] libmachine: (addons-679190) Ensuring networks are active...
	I0920 17:36:15.321600  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:15.322455  245557 main.go:141] libmachine: (addons-679190) Ensuring network default is active
	I0920 17:36:15.322876  245557 main.go:141] libmachine: (addons-679190) Ensuring network mk-addons-679190 is active
	I0920 17:36:15.323465  245557 main.go:141] libmachine: (addons-679190) Getting domain xml...
	I0920 17:36:15.324200  245557 main.go:141] libmachine: (addons-679190) Creating domain...
	I0920 17:36:16.552011  245557 main.go:141] libmachine: (addons-679190) Waiting to get IP...
	I0920 17:36:16.552931  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:16.553409  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:16.553467  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:16.553404  245579 retry.go:31] will retry after 233.074861ms: waiting for machine to come up
	I0920 17:36:16.788019  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:16.788566  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:16.788598  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:16.788486  245579 retry.go:31] will retry after 254.61991ms: waiting for machine to come up
	I0920 17:36:17.044950  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:17.045459  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:17.045481  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:17.045403  245579 retry.go:31] will retry after 378.47406ms: waiting for machine to come up
	I0920 17:36:17.424996  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:17.425465  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:17.425530  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:17.425456  245579 retry.go:31] will retry after 555.098735ms: waiting for machine to come up
	I0920 17:36:17.982414  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:17.982850  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:17.982872  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:17.982792  245579 retry.go:31] will retry after 674.733173ms: waiting for machine to come up
	I0920 17:36:18.658928  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:18.659386  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:18.659419  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:18.659377  245579 retry.go:31] will retry after 611.03774ms: waiting for machine to come up
	I0920 17:36:19.272181  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:19.272670  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:19.272694  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:19.272607  245579 retry.go:31] will retry after 945.481389ms: waiting for machine to come up
	I0920 17:36:20.219424  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:20.219953  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:20.219984  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:20.219887  245579 retry.go:31] will retry after 1.421505917s: waiting for machine to come up
	I0920 17:36:21.643502  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:21.643959  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:21.643984  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:21.643882  245579 retry.go:31] will retry after 1.172513378s: waiting for machine to come up
	I0920 17:36:22.818244  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:22.818633  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:22.818660  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:22.818591  245579 retry.go:31] will retry after 1.867074328s: waiting for machine to come up
	I0920 17:36:24.687694  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:24.688210  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:24.688237  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:24.688136  245579 retry.go:31] will retry after 2.905548451s: waiting for machine to come up
	I0920 17:36:27.597342  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:27.597969  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:27.597998  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:27.597896  245579 retry.go:31] will retry after 3.379184262s: waiting for machine to come up
	I0920 17:36:30.979086  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:30.979495  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:30.979519  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:30.979448  245579 retry.go:31] will retry after 3.110787974s: waiting for machine to come up
	I0920 17:36:34.093921  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.094329  245557 main.go:141] libmachine: (addons-679190) Found IP for machine: 192.168.39.158
	I0920 17:36:34.094349  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has current primary IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.094357  245557 main.go:141] libmachine: (addons-679190) Reserving static IP address...
	I0920 17:36:34.094749  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find host DHCP lease matching {name: "addons-679190", mac: "52:54:00:40:27:d9", ip: "192.168.39.158"} in network mk-addons-679190
	I0920 17:36:34.175576  245557 main.go:141] libmachine: (addons-679190) Reserved static IP address: 192.168.39.158
	I0920 17:36:34.175604  245557 main.go:141] libmachine: (addons-679190) DBG | Getting to WaitForSSH function...
	I0920 17:36:34.175611  245557 main.go:141] libmachine: (addons-679190) Waiting for SSH to be available...
	I0920 17:36:34.178818  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.179284  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:minikube Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.179318  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.179535  245557 main.go:141] libmachine: (addons-679190) DBG | Using SSH client type: external
	I0920 17:36:34.179710  245557 main.go:141] libmachine: (addons-679190) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa (-rw-------)
	I0920 17:36:34.179795  245557 main.go:141] libmachine: (addons-679190) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 17:36:34.179828  245557 main.go:141] libmachine: (addons-679190) DBG | About to run SSH command:
	I0920 17:36:34.179847  245557 main.go:141] libmachine: (addons-679190) DBG | exit 0
	I0920 17:36:34.306044  245557 main.go:141] libmachine: (addons-679190) DBG | SSH cmd err, output: <nil>: 
	I0920 17:36:34.306371  245557 main.go:141] libmachine: (addons-679190) KVM machine creation complete!
	I0920 17:36:34.306713  245557 main.go:141] libmachine: (addons-679190) Calling .GetConfigRaw
	I0920 17:36:34.307406  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:34.307658  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:34.307833  245557 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 17:36:34.307846  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:34.309410  245557 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 17:36:34.309438  245557 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 17:36:34.309444  245557 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 17:36:34.309450  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:34.312360  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.312741  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.312770  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.312993  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:34.313211  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.313408  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.313560  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:34.313751  245557 main.go:141] libmachine: Using SSH client type: native
	I0920 17:36:34.314059  245557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0920 17:36:34.314074  245557 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 17:36:34.421222  245557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:36:34.421246  245557 main.go:141] libmachine: Detecting the provisioner...
	I0920 17:36:34.421255  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:34.424519  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.424951  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.424984  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.425125  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:34.425370  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.425509  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.425630  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:34.425752  245557 main.go:141] libmachine: Using SSH client type: native
	I0920 17:36:34.425952  245557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0920 17:36:34.425963  245557 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 17:36:34.534619  245557 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 17:36:34.534731  245557 main.go:141] libmachine: found compatible host: buildroot
	I0920 17:36:34.534745  245557 main.go:141] libmachine: Provisioning with buildroot...
	I0920 17:36:34.534753  245557 main.go:141] libmachine: (addons-679190) Calling .GetMachineName
	I0920 17:36:34.535038  245557 buildroot.go:166] provisioning hostname "addons-679190"
	I0920 17:36:34.535064  245557 main.go:141] libmachine: (addons-679190) Calling .GetMachineName
	I0920 17:36:34.535245  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:34.538122  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.538459  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.538489  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.538610  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:34.538795  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.538955  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.539101  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:34.539263  245557 main.go:141] libmachine: Using SSH client type: native
	I0920 17:36:34.539465  245557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0920 17:36:34.539483  245557 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-679190 && echo "addons-679190" | sudo tee /etc/hostname
	I0920 17:36:34.663598  245557 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-679190
	
	I0920 17:36:34.663632  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:34.666622  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.667078  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.667114  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.667316  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:34.667476  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.667667  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.667787  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:34.667933  245557 main.go:141] libmachine: Using SSH client type: native
	I0920 17:36:34.668103  245557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0920 17:36:34.668119  245557 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-679190' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-679190/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-679190' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:36:34.787041  245557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:36:34.787076  245557 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 17:36:34.787136  245557 buildroot.go:174] setting up certificates
	I0920 17:36:34.787154  245557 provision.go:84] configureAuth start
	I0920 17:36:34.787172  245557 main.go:141] libmachine: (addons-679190) Calling .GetMachineName
	I0920 17:36:34.787485  245557 main.go:141] libmachine: (addons-679190) Calling .GetIP
	I0920 17:36:34.790870  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.791296  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.791324  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.791540  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:34.793848  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.794252  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.794283  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.794450  245557 provision.go:143] copyHostCerts
	I0920 17:36:34.794535  245557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 17:36:34.794685  245557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 17:36:34.794773  245557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 17:36:34.794847  245557 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.addons-679190 san=[127.0.0.1 192.168.39.158 addons-679190 localhost minikube]
	I0920 17:36:34.890555  245557 provision.go:177] copyRemoteCerts
	I0920 17:36:34.890650  245557 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:36:34.890686  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:34.893735  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.894102  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.894133  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.894315  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:34.894532  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.894715  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:34.894855  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:34.980634  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 17:36:35.005273  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 17:36:35.029188  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 17:36:35.052832  245557 provision.go:87] duration metric: took 265.657137ms to configureAuth
	I0920 17:36:35.052876  245557 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:36:35.053063  245557 config.go:182] Loaded profile config "addons-679190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:36:35.053145  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:35.056181  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.056518  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.056559  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.056787  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:35.056985  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:35.057136  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:35.057315  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:35.057524  245557 main.go:141] libmachine: Using SSH client type: native
	I0920 17:36:35.057740  245557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0920 17:36:35.057756  245557 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:36:35.573462  245557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
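	(The step above drops a sysconfig file so CRI-O treats the in-cluster service CIDR 10.96.0.0/12 as an insecure registry, then restarts the daemon. A minimal, hypothetical spot-check from inside the guest, for example over "minikube ssh", using only the path shown in the log:
	    cat /etc/sysconfig/crio.minikube   # expect CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    systemctl is-active crio           # expect "active" once the restart has finished
	)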
	I0920 17:36:35.573493  245557 main.go:141] libmachine: Checking connection to Docker...
	I0920 17:36:35.573502  245557 main.go:141] libmachine: (addons-679190) Calling .GetURL
	I0920 17:36:35.574853  245557 main.go:141] libmachine: (addons-679190) DBG | Using libvirt version 6000000
	I0920 17:36:35.576713  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.577033  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.577063  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.577214  245557 main.go:141] libmachine: Docker is up and running!
	I0920 17:36:35.577231  245557 main.go:141] libmachine: Reticulating splines...
	I0920 17:36:35.577240  245557 client.go:171] duration metric: took 21.079858169s to LocalClient.Create
	I0920 17:36:35.577264  245557 start.go:167] duration metric: took 21.079928938s to libmachine.API.Create "addons-679190"
	I0920 17:36:35.577275  245557 start.go:293] postStartSetup for "addons-679190" (driver="kvm2")
	I0920 17:36:35.577284  245557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:36:35.577302  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:35.577559  245557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:36:35.577583  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:35.579661  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.579997  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.580031  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.580129  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:35.580313  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:35.580436  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:35.580539  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:35.664189  245557 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:36:35.668353  245557 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:36:35.668386  245557 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 17:36:35.668464  245557 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 17:36:35.668487  245557 start.go:296] duration metric: took 91.20684ms for postStartSetup
	I0920 17:36:35.668527  245557 main.go:141] libmachine: (addons-679190) Calling .GetConfigRaw
	I0920 17:36:35.669134  245557 main.go:141] libmachine: (addons-679190) Calling .GetIP
	I0920 17:36:35.671946  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.672345  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.672368  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.672652  245557 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/config.json ...
	I0920 17:36:35.672885  245557 start.go:128] duration metric: took 21.194903618s to createHost
	I0920 17:36:35.672915  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:35.675216  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.675474  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.675498  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.675604  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:35.675764  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:35.675940  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:35.676046  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:35.676204  245557 main.go:141] libmachine: Using SSH client type: native
	I0920 17:36:35.676362  245557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0920 17:36:35.676372  245557 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:36:35.786755  245557 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726853795.758756532
	
	I0920 17:36:35.786780  245557 fix.go:216] guest clock: 1726853795.758756532
	I0920 17:36:35.786799  245557 fix.go:229] Guest: 2024-09-20 17:36:35.758756532 +0000 UTC Remote: 2024-09-20 17:36:35.672900424 +0000 UTC m=+21.305727812 (delta=85.856108ms)
	I0920 17:36:35.786847  245557 fix.go:200] guest clock delta is within tolerance: 85.856108ms
	I0920 17:36:35.786854  245557 start.go:83] releasing machines lock for "addons-679190", held for 21.309019314s
	I0920 17:36:35.786901  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:35.787199  245557 main.go:141] libmachine: (addons-679190) Calling .GetIP
	I0920 17:36:35.790139  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.790527  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.790550  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.790715  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:35.791190  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:35.791390  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:35.791498  245557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:36:35.791545  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:35.791598  245557 ssh_runner.go:195] Run: cat /version.json
	I0920 17:36:35.791651  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:35.794437  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.794670  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.794822  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.794852  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.795016  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:35.795136  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.795161  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.795193  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:35.795310  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:35.795381  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:35.795460  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:35.795532  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:35.795596  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:35.795696  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:35.911918  245557 ssh_runner.go:195] Run: systemctl --version
	I0920 17:36:35.917670  245557 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:36:36.074996  245557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:36:36.080814  245557 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:36:36.080895  245557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:36:36.096152  245557 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 17:36:36.096189  245557 start.go:495] detecting cgroup driver to use...
	I0920 17:36:36.096260  245557 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:36:36.113653  245557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:36:36.128855  245557 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:36:36.128933  245557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:36:36.143261  245557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:36:36.157398  245557 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:36:36.266690  245557 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:36:36.425266  245557 docker.go:233] disabling docker service ...
	I0920 17:36:36.425347  245557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:36:36.446451  245557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:36:36.459829  245557 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:36:36.571061  245557 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:36:36.683832  245557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:36:36.698810  245557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:36:36.718244  245557 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:36:36.718313  245557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:36:36.729705  245557 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:36:36.729784  245557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:36:36.741247  245557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:36:36.752134  245557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:36:36.762794  245557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:36:36.773800  245557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:36:36.784266  245557 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:36:36.801953  245557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:36:36.812569  245557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:36:36.822394  245557 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 17:36:36.822468  245557 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 17:36:36.835966  245557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:36:36.845803  245557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:36:36.958625  245557 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 17:36:37.052231  245557 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:36:37.052346  245557 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:36:37.057614  245557 start.go:563] Will wait 60s for crictl version
	I0920 17:36:37.057825  245557 ssh_runner.go:195] Run: which crictl
	I0920 17:36:37.061526  245557 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:36:37.105824  245557 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:36:37.105959  245557 ssh_runner.go:195] Run: crio --version
	I0920 17:36:37.136539  245557 ssh_runner.go:195] Run: crio --version
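	(The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place -- pause image registry.k8s.io/pause:3.10, cgroupfs as cgroup manager, conmon_cgroup set to "pod", and net.ipv4.ip_unprivileged_port_start=0 under default_sysctls -- before CRI-O is restarted and its version queried. A rough sketch of confirming those values by hand, assuming the same file layout as in the log:
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    sudo crictl info | head -n 20      # runtime status, via the endpoint written to /etc/crictl.yaml above
	)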
	I0920 17:36:37.171796  245557 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:36:37.173345  245557 main.go:141] libmachine: (addons-679190) Calling .GetIP
	I0920 17:36:37.176324  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:37.176764  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:37.176792  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:37.177021  245557 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:36:37.181300  245557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:36:37.194040  245557 kubeadm.go:883] updating cluster {Name:addons-679190 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-679190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 17:36:37.194155  245557 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:36:37.194199  245557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:36:37.225234  245557 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 17:36:37.225302  245557 ssh_runner.go:195] Run: which lz4
	I0920 17:36:37.229191  245557 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 17:36:37.233185  245557 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 17:36:37.233226  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 17:36:38.392285  245557 crio.go:462] duration metric: took 1.163136107s to copy over tarball
	I0920 17:36:38.392376  245557 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 17:36:40.499360  245557 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.106950323s)
	I0920 17:36:40.499391  245557 crio.go:469] duration metric: took 2.107072401s to extract the tarball
	I0920 17:36:40.499401  245557 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 17:36:40.535110  245557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:36:40.583829  245557 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 17:36:40.583859  245557 cache_images.go:84] Images are preloaded, skipping loading
	I0920 17:36:40.583871  245557 kubeadm.go:934] updating node { 192.168.39.158 8443 v1.31.1 crio true true} ...
	I0920 17:36:40.584018  245557 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-679190 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-679190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 17:36:40.584106  245557 ssh_runner.go:195] Run: crio config
	I0920 17:36:40.641090  245557 cni.go:84] Creating CNI manager for ""
	I0920 17:36:40.641113  245557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 17:36:40.641123  245557 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 17:36:40.641149  245557 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.158 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-679190 NodeName:addons-679190 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 17:36:40.641304  245557 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-679190"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 17:36:40.641382  245557 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:36:40.652528  245557 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 17:36:40.652607  245557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 17:36:40.663453  245557 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 17:36:40.681121  245557 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:36:40.698855  245557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
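	(At this point the kubelet drop-in, the kubelet unit, and the rendered kubeadm config have been copied onto the node, the last one as /var/tmp/minikube/kubeadm.yaml.new. As an illustrative aside rather than part of the flow shown here, recent kubeadm releases can sanity-check such a file before init consumes it, e.g. with the bundled binary from the log:
	    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new
	)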
	I0920 17:36:40.717572  245557 ssh_runner.go:195] Run: grep 192.168.39.158	control-plane.minikube.internal$ /etc/hosts
	I0920 17:36:40.721648  245557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:36:40.733213  245557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:36:40.847265  245557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:36:40.863856  245557 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190 for IP: 192.168.39.158
	I0920 17:36:40.863898  245557 certs.go:194] generating shared ca certs ...
	I0920 17:36:40.863925  245557 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:40.864134  245557 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 17:36:41.007978  245557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt ...
	I0920 17:36:41.008017  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt: {Name:mkbb1e3a51019c4e83406d8748ea8210552ea552 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.008221  245557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key ...
	I0920 17:36:41.008234  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key: {Name:mk2dcada8581decbc501b050c6a03f21e66e112a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.008308  245557 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 17:36:41.129733  245557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt ...
	I0920 17:36:41.129766  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt: {Name:mke04674cac70a8962a647c3804e5e99b455bf6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.129942  245557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key ...
	I0920 17:36:41.129953  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key: {Name:mkb6f1f78834acbea54fe32363e27f933f4228ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.130023  245557 certs.go:256] generating profile certs ...
	I0920 17:36:41.130084  245557 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.key
	I0920 17:36:41.130099  245557 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt with IP's: []
	I0920 17:36:41.201155  245557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt ...
	I0920 17:36:41.201188  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: {Name:mk1833d3bbb2c8e05579222e591c1458c577f545 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.201349  245557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.key ...
	I0920 17:36:41.201360  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.key: {Name:mkace5ffe93f144a352a62d890af2292b0d676e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.201423  245557 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.key.83bf7d9f
	I0920 17:36:41.201440  245557 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.crt.83bf7d9f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.158]
	I0920 17:36:41.370047  245557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.crt.83bf7d9f ...
	I0920 17:36:41.370080  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.crt.83bf7d9f: {Name:mkf5b06795843289171f8aec4b7922bbb13be891 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.370249  245557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.key.83bf7d9f ...
	I0920 17:36:41.370262  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.key.83bf7d9f: {Name:mka349d2513fe2d14b9ca6aa0bfa8d7a73378d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.370335  245557 certs.go:381] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.crt.83bf7d9f -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.crt
	I0920 17:36:41.370407  245557 certs.go:385] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.key.83bf7d9f -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.key
	I0920 17:36:41.370452  245557 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.key
	I0920 17:36:41.370468  245557 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.crt with IP's: []
	I0920 17:36:41.587021  245557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.crt ...
	I0920 17:36:41.587061  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.crt: {Name:mkfc4b71c33e958d6677e7223f0b780b75e49b3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.587221  245557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.key ...
	I0920 17:36:41.587234  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.key: {Name:mk3aa7527b80ede87bad50a2915cf2799293254d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.587394  245557 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:36:41.587429  245557 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 17:36:41.587456  245557 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:36:41.587475  245557 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 17:36:41.588059  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:36:41.613973  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:36:41.636373  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:36:41.669307  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 17:36:41.693224  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 17:36:41.716434  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 17:36:41.739030  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:36:41.761987  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:36:41.785735  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:36:41.808837  245557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 17:36:41.824917  245557 ssh_runner.go:195] Run: openssl version
	I0920 17:36:41.830533  245557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:36:41.841288  245557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:36:41.845628  245557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:36:41.845706  245557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:36:41.851639  245557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
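	(The two openssl steps above install minikube's CA into the system trust store: "openssl x509 -hash" prints the certificate's subject-name hash, and the hash-named symlink, here b5213941.0, is what OpenSSL's directory lookup expects under /etc/ssl/certs. A small sketch of verifying the result afterwards, reusing only names taken from the log:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # should print b5213941
	    readlink -f /etc/ssl/certs/b5213941.0                                     # should resolve to the minikubeCA.pem copy
	)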
	I0920 17:36:41.864422  245557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:36:41.868781  245557 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:36:41.868858  245557 kubeadm.go:392] StartCluster: {Name:addons-679190 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-679190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:36:41.868969  245557 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 17:36:41.869033  245557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 17:36:41.908638  245557 cri.go:89] found id: ""
	I0920 17:36:41.908716  245557 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 17:36:41.918913  245557 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 17:36:41.929048  245557 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 17:36:41.939489  245557 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 17:36:41.939517  245557 kubeadm.go:157] found existing configuration files:
	
	I0920 17:36:41.939604  245557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 17:36:41.948942  245557 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 17:36:41.949013  245557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 17:36:41.958442  245557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 17:36:41.967545  245557 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 17:36:41.967615  245557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 17:36:41.977594  245557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 17:36:41.987246  245557 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 17:36:41.987350  245557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 17:36:41.997309  245557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 17:36:42.006453  245557 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 17:36:42.006522  245557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
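Editor's note: the grep/rm entries above are minikube's stale-config check — each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that is missing or does not reference it is removed so kubeadm can regenerate it. Below is a minimal Go sketch of that pattern; the endpoint and file list are taken from the log, but the code is illustrative only, not minikube's actual ssh_runner-based implementation.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Endpoint and file list as they appear in the log above.
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero if the file is missing or the endpoint is absent,
		// matching the "Process exited with status 2" entries in the log.
		if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
			fmt.Printf("removing stale %s\n", f)
			_ = os.Remove(f) // in this sketch, ignore "file does not exist" errors
		}
	}
}
```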
	I0920 17:36:42.016044  245557 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 17:36:42.080202  245557 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 17:36:42.080363  245557 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 17:36:42.176051  245557 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 17:36:42.176190  245557 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 17:36:42.176291  245557 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 17:36:42.188037  245557 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 17:36:42.196848  245557 out.go:235]   - Generating certificates and keys ...
	I0920 17:36:42.196960  245557 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 17:36:42.197037  245557 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 17:36:42.434562  245557 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 17:36:42.521395  245557 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 17:36:42.607758  245557 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 17:36:42.669378  245557 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 17:36:42.904167  245557 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 17:36:42.904374  245557 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-679190 localhost] and IPs [192.168.39.158 127.0.0.1 ::1]
	I0920 17:36:43.188202  245557 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 17:36:43.188434  245557 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-679190 localhost] and IPs [192.168.39.158 127.0.0.1 ::1]
	I0920 17:36:43.287638  245557 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 17:36:43.473845  245557 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 17:36:43.593299  245557 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 17:36:43.593384  245557 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 17:36:43.987222  245557 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 17:36:44.336150  245557 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 17:36:44.457367  245557 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 17:36:44.695860  245557 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 17:36:44.844623  245557 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 17:36:44.845027  245557 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 17:36:44.847431  245557 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 17:36:44.849263  245557 out.go:235]   - Booting up control plane ...
	I0920 17:36:44.849358  245557 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 17:36:44.849439  245557 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 17:36:44.849514  245557 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 17:36:44.866081  245557 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 17:36:44.873618  245557 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 17:36:44.873725  245557 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 17:36:44.992494  245557 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 17:36:44.992682  245557 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 17:36:45.493964  245557 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.923125ms
	I0920 17:36:45.494050  245557 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 17:36:50.993341  245557 kubeadm.go:310] [api-check] The API server is healthy after 5.503314416s
	I0920 17:36:51.014477  245557 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 17:36:51.035005  245557 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 17:36:51.064511  245557 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 17:36:51.064710  245557 kubeadm.go:310] [mark-control-plane] Marking the node addons-679190 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 17:36:51.081848  245557 kubeadm.go:310] [bootstrap-token] Using token: r0jau5.grdtbm10vjda8jxv
	I0920 17:36:51.083289  245557 out.go:235]   - Configuring RBAC rules ...
	I0920 17:36:51.083448  245557 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 17:36:51.089533  245557 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 17:36:51.109444  245557 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 17:36:51.114960  245557 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 17:36:51.119855  245557 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 17:36:51.128234  245557 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 17:36:51.412359  245557 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 17:36:51.848915  245557 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 17:36:52.420530  245557 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 17:36:52.421383  245557 kubeadm.go:310] 
	I0920 17:36:52.421451  245557 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 17:36:52.421460  245557 kubeadm.go:310] 
	I0920 17:36:52.421602  245557 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 17:36:52.421621  245557 kubeadm.go:310] 
	I0920 17:36:52.421658  245557 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 17:36:52.421740  245557 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 17:36:52.421795  245557 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 17:36:52.421807  245557 kubeadm.go:310] 
	I0920 17:36:52.421870  245557 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 17:36:52.421881  245557 kubeadm.go:310] 
	I0920 17:36:52.421965  245557 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 17:36:52.421977  245557 kubeadm.go:310] 
	I0920 17:36:52.422055  245557 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 17:36:52.422173  245557 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 17:36:52.422286  245557 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 17:36:52.422300  245557 kubeadm.go:310] 
	I0920 17:36:52.422432  245557 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 17:36:52.422559  245557 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 17:36:52.422572  245557 kubeadm.go:310] 
	I0920 17:36:52.422674  245557 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r0jau5.grdtbm10vjda8jxv \
	I0920 17:36:52.422819  245557 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 \
	I0920 17:36:52.422877  245557 kubeadm.go:310] 	--control-plane 
	I0920 17:36:52.422886  245557 kubeadm.go:310] 
	I0920 17:36:52.422961  245557 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 17:36:52.422970  245557 kubeadm.go:310] 
	I0920 17:36:52.423049  245557 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r0jau5.grdtbm10vjda8jxv \
	I0920 17:36:52.423143  245557 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 
	I0920 17:36:52.423925  245557 kubeadm.go:310] W0920 17:36:42.058037     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:36:52.424286  245557 kubeadm.go:310] W0920 17:36:42.059124     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:36:52.424412  245557 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
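Editor's note: the kubeadm join commands printed above include a --discovery-token-ca-cert-hash. Per the kubeadm documentation, that value is the SHA-256 digest of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo). The sketch below recomputes such a hash from a ca.crt file; the file path is illustrative (in this run the certificates live under /var/lib/minikube/certs on the VM), and this is not part of minikube's code.

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Illustrative path; substitute the cluster CA certificate.
	pemBytes, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("ca.crt contains no PEM block")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo of the CA public key, as kubeadm does.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
```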
	I0920 17:36:52.424454  245557 cni.go:84] Creating CNI manager for ""
	I0920 17:36:52.424467  245557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 17:36:52.426470  245557 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 17:36:52.427945  245557 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 17:36:52.438400  245557 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
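Editor's note: the scp step above writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The log does not show the file's contents; the sketch below writes a generic bridge + portmap conflist of the kind the CNI reference plugins accept, purely to illustrate the shape of such a file. The field values are assumptions, not the exact config minikube ships.

```go
package main

import "os"

// A generic bridge + portmap CNI config; values are illustrative only.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Written locally here; on the node the file goes to /etc/cni/net.d/1-k8s.conflist.
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```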
	I0920 17:36:52.456765  245557 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 17:36:52.456859  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:52.456882  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-679190 minikube.k8s.io/updated_at=2024_09_20T17_36_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=addons-679190 minikube.k8s.io/primary=true
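Editor's note: the two ssh_runner calls above bootstrap access for minikube itself — a `minikube-rbac` ClusterRoleBinding granting cluster-admin to the kube-system default service account, plus identifying labels on the node. Below is a hedged client-go equivalent of the first call; the kubeconfig path is taken from the log, but the code is an illustration, not minikube's implementation (which shells out to kubectl as shown).

```go
package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used in the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent of: kubectl create clusterrolebinding minikube-rbac
	//   --clusterrole=cluster-admin --serviceaccount=kube-system:default
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Name:      "default",
			Namespace: "kube-system",
		}},
	}
	if _, err := cs.RbacV1().ClusterRoleBindings().Create(context.TODO(), crb, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```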
	I0920 17:36:52.484735  245557 ops.go:34] apiserver oom_adj: -16
	I0920 17:36:52.608325  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:53.108755  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:53.609368  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:54.109047  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:54.608496  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:55.109057  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:55.608759  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:56.108486  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:56.176721  245557 kubeadm.go:1113] duration metric: took 3.719930405s to wait for elevateKubeSystemPrivileges
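Editor's note: the repeated `kubectl get sa default` calls between 17:36:52 and 17:36:56 are a poll loop — the cluster is only treated as usable once the default service account exists, which is what the 3.7s elevateKubeSystemPrivileges metric above measures. A rough client-go version of such a wait loop is sketched below; the timeout and interval are assumptions, and minikube itself polls via kubectl as shown in the log.

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms until the "default" ServiceAccount appears or we time out.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for the default service account")
}
```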
	I0920 17:36:56.176772  245557 kubeadm.go:394] duration metric: took 14.307920068s to StartCluster
	I0920 17:36:56.176799  245557 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:56.176943  245557 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:36:56.177302  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:56.177559  245557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 17:36:56.177585  245557 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:36:56.177698  245557 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 17:36:56.177839  245557 addons.go:69] Setting yakd=true in profile "addons-679190"
	I0920 17:36:56.177853  245557 config.go:182] Loaded profile config "addons-679190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:36:56.177868  245557 addons.go:69] Setting metrics-server=true in profile "addons-679190"
	I0920 17:36:56.177883  245557 addons.go:234] Setting addon metrics-server=true in "addons-679190"
	I0920 17:36:56.177861  245557 addons.go:69] Setting inspektor-gadget=true in profile "addons-679190"
	I0920 17:36:56.177860  245557 addons.go:234] Setting addon yakd=true in "addons-679190"
	I0920 17:36:56.177925  245557 addons.go:234] Setting addon inspektor-gadget=true in "addons-679190"
	I0920 17:36:56.177941  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.177951  245557 addons.go:69] Setting registry=true in profile "addons-679190"
	I0920 17:36:56.177965  245557 addons.go:234] Setting addon registry=true in "addons-679190"
	I0920 17:36:56.177977  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.177987  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.177981  245557 addons.go:69] Setting default-storageclass=true in profile "addons-679190"
	I0920 17:36:56.178004  245557 addons.go:69] Setting storage-provisioner=true in profile "addons-679190"
	I0920 17:36:56.178010  245557 addons.go:69] Setting gcp-auth=true in profile "addons-679190"
	I0920 17:36:56.178020  245557 addons.go:234] Setting addon storage-provisioner=true in "addons-679190"
	I0920 17:36:56.178034  245557 addons.go:69] Setting cloud-spanner=true in profile "addons-679190"
	I0920 17:36:56.178041  245557 mustload.go:65] Loading cluster: addons-679190
	I0920 17:36:56.178050  245557 addons.go:234] Setting addon cloud-spanner=true in "addons-679190"
	I0920 17:36:56.178062  245557 addons.go:69] Setting volcano=true in profile "addons-679190"
	I0920 17:36:56.178075  245557 addons.go:234] Setting addon volcano=true in "addons-679190"
	I0920 17:36:56.178081  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.178094  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.178175  245557 addons.go:69] Setting ingress-dns=true in profile "addons-679190"
	I0920 17:36:56.178199  245557 addons.go:234] Setting addon ingress-dns=true in "addons-679190"
	I0920 17:36:56.178209  245557 config.go:182] Loaded profile config "addons-679190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:36:56.178245  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.177943  245557 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-679190"
	I0920 17:36:56.178273  245557 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-679190"
	I0920 17:36:56.178311  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.178483  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.178495  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.178513  245557 addons.go:69] Setting volumesnapshots=true in profile "addons-679190"
	I0920 17:36:56.178526  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.178532  245557 addons.go:234] Setting addon volumesnapshots=true in "addons-679190"
	I0920 17:36:56.178545  245557 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-679190"
	I0920 17:36:56.178483  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.178588  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.178533  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.178643  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.177984  245557 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-679190"
	I0920 17:36:56.178689  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.178694  245557 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-679190"
	I0920 17:36:56.178699  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.178709  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.178728  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.178810  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.178879  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.178557  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.179064  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.178020  245557 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-679190"
	I0920 17:36:56.179099  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.178495  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.179247  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.177991  245557 addons.go:69] Setting ingress=true in profile "addons-679190"
	I0920 17:36:56.179277  245557 addons.go:234] Setting addon ingress=true in "addons-679190"
	I0920 17:36:56.179294  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.179319  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.177934  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.178678  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.179597  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.178052  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.178588  245557 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-679190"
	I0920 17:36:56.180168  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.180484  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.180521  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.180557  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.180594  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.180778  245557 out.go:177] * Verifying Kubernetes components...
	I0920 17:36:56.182381  245557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:36:56.198770  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38321
	I0920 17:36:56.199048  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I0920 17:36:56.199209  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34951
	I0920 17:36:56.199459  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.199673  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.199690  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.199783  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36793
	I0920 17:36:56.199983  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.200000  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.200323  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.200418  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.200444  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.200491  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.200938  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.200956  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.201021  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.201085  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.201102  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.201191  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.201335  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.201376  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.201425  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.201697  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.201767  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41267
	I0920 17:36:56.202320  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.202366  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.202504  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.202548  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.202622  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.203097  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.203117  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.203410  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.203946  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.203983  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.206753  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.206798  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.207297  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.207335  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.208443  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.208479  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.211454  245557 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-679190"
	I0920 17:36:56.211505  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.211883  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.211928  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.216373  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43059
	I0920 17:36:56.216884  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.217519  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.217551  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.217867  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.218517  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.218557  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.220437  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
	I0920 17:36:56.220844  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.221299  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.221320  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.221697  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.222270  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.222325  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.232194  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I0920 17:36:56.232900  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.233545  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.233577  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.233996  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.234205  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.235969  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.236411  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.236459  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.241821  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0920 17:36:56.242381  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.242943  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.242972  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.243334  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.243565  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.246291  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0920 17:36:56.246829  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.247399  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.247419  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.247811  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.248026  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39577
	I0920 17:36:56.248056  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.248707  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37839
	I0920 17:36:56.249331  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.249815  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.249832  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.250336  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.250958  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.251000  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.251218  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45981
	I0920 17:36:56.251950  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.252204  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44715
	I0920 17:36:56.254426  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.255080  245557 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 17:36:56.256399  245557 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 17:36:56.256419  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 17:36:56.256441  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.256553  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38415
	I0920 17:36:56.257740  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I0920 17:36:56.257771  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0920 17:36:56.257868  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.257981  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.258005  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.258066  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.258116  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.258582  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.258601  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.258760  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.258780  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.258948  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.258963  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.259056  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.259091  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.259220  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.259239  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.259287  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.259455  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.259471  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.259756  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.259781  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.259825  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.259847  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46059
	I0920 17:36:56.259938  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.259979  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.260044  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.260089  245557 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 17:36:56.260188  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.260211  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.260684  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.260693  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.261178  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.261337  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.261375  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.261393  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.261456  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.261758  245557 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 17:36:56.261779  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 17:36:56.261799  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.262451  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.263727  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.263750  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.264542  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.264766  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.265197  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.265267  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.265425  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.265871  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.266255  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.266942  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.266966  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.267051  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.267450  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.267450  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.267501  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.267624  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.267627  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.267797  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.267885  245557 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 17:36:56.267955  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.268130  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.268477  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.268931  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:36:56.268949  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:36:56.269155  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.269223  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:36:56.269254  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:36:56.269261  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:36:56.269269  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:36:56.269276  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:36:56.270959  245557 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 17:36:56.272219  245557 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 17:36:56.272238  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 17:36:56.272258  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.272345  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:36:56.272373  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:36:56.272384  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	W0920 17:36:56.272473  245557 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 17:36:56.272957  245557 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 17:36:56.274416  245557 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 17:36:56.274435  245557 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 17:36:56.274458  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.278501  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.278771  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.278948  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.278966  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.279120  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.279308  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.279334  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.279356  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.279460  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.279524  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.279789  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.279989  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.280167  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.280659  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.283752  245557 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0920 17:36:56.285194  245557 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 17:36:56.285213  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 17:36:56.285236  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.287357  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46099
	I0920 17:36:56.288285  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.288568  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I0920 17:36:56.289501  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.289586  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.289657  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.289686  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.290284  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.290294  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.290306  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.290351  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.290365  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.290723  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.290770  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.290785  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.290986  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.291421  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.291439  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.291683  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.291884  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.292442  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43191
	I0920 17:36:56.292861  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.293333  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.293359  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.293708  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.294249  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.294285  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.296604  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0920 17:36:56.297114  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.297352  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33903
	I0920 17:36:56.297691  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.297708  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.297880  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.298156  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.300334  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45455
	I0920 17:36:56.300338  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.300443  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.300462  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.300908  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.301356  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.301512  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.301528  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.301592  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.301994  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.302179  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.302935  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.304975  245557 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 17:36:56.305417  245557 addons.go:234] Setting addon default-storageclass=true in "addons-679190"
	I0920 17:36:56.305487  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.305884  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.305971  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.306205  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.306459  245557 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 17:36:56.306481  245557 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 17:36:56.306510  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.308001  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 17:36:56.309187  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41931
	I0920 17:36:56.309544  245557 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 17:36:56.309565  245557 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 17:36:56.309594  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.309653  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38235
	I0920 17:36:56.310121  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.310680  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.310703  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.310778  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.311300  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.311383  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.311411  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.311552  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.311837  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.312758  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.312848  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.313699  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.314421  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.314691  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.314716  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.314740  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.314836  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.315154  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.315267  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.315378  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.317370  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.317387  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.317414  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37403
	I0920 17:36:56.317504  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37673
	I0920 17:36:56.317566  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.318020  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.318021  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.318113  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.318435  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.318527  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.318548  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.318565  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.318581  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.318605  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.318834  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35473
	I0920 17:36:56.318913  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.319073  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.319167  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.319300  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.319543  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.319811  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.319835  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.319903  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 17:36:56.320366  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.320592  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.321679  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.321733  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.322512  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.322733  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 17:36:56.323531  245557 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 17:36:56.323540  245557 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 17:36:56.324345  245557 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 17:36:56.325100  245557 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 17:36:56.325122  245557 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 17:36:56.325140  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.326093  245557 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:36:56.326112  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 17:36:56.326130  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.326232  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 17:36:56.326618  245557 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:36:56.328288  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 17:36:56.328983  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.329264  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.329496  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.329527  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.329672  245557 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:36:56.329714  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.329725  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.329730  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.329856  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.329944  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.330097  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.330102  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.330283  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.330461  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.330467  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.331167  245557 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 17:36:56.331190  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 17:36:56.331208  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.331686  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 17:36:56.333392  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 17:36:56.334233  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39629
	I0920 17:36:56.334709  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.334787  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.335248  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.335265  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.335335  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.335354  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.335411  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.335560  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.335619  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.335685  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.335796  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.335834  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.336064  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 17:36:56.337255  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.338333  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 17:36:56.338361  245557 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 17:36:56.338435  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36195
	I0920 17:36:56.338790  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.339334  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.339351  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.339451  245557 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 17:36:56.339481  245557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 17:36:56.339503  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.339705  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.340127  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.340219  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.342208  245557 out.go:177]   - Using image docker.io/busybox:stable
	I0920 17:36:56.342943  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.343386  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.343420  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.343599  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.343794  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.343962  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.344071  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.344485  245557 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 17:36:56.344506  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 17:36:56.344523  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.347682  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.348119  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.348141  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.348318  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.348498  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.348620  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.348730  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	W0920 17:36:56.350081  245557 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37614->192.168.39.158:22: read: connection reset by peer
	I0920 17:36:56.350113  245557 retry.go:31] will retry after 277.419822ms: ssh: handshake failed: read tcp 192.168.39.1:37614->192.168.39.158:22: read: connection reset by peer
	I0920 17:36:56.358579  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37577
	I0920 17:36:56.359069  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.359542  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.359571  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.359910  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.360078  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.361619  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.361824  245557 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 17:36:56.361842  245557 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 17:36:56.361860  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.364857  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.365235  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.365271  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.365432  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.365644  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.365803  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.365981  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	W0920 17:36:56.368948  245557 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37630->192.168.39.158:22: read: connection reset by peer
	I0920 17:36:56.368974  245557 retry.go:31] will retry after 189.220194ms: ssh: handshake failed: read tcp 192.168.39.1:37630->192.168.39.158:22: read: connection reset by peer
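The two "dial failure (will retry)" pairs above show sshutil giving up on a reset SSH handshake and retry.go scheduling another attempt after a jittered delay. The following is a minimal, self-contained sketch of that retry-with-jitter pattern in Go; it is illustrative only and is not minikube's actual retry.go implementation (the helper name, attempt count, and base delay are assumptions).

// Illustrative sketch only (not minikube's retry.go): retrying a flaky dial
// with a jittered delay, as the "will retry after ..." lines above do for the
// ssh handshake failures.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter calls fn up to attempts times, sleeping base plus a random
// jitter between failures, and returns the last error if every attempt fails.
func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithJitter(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("ssh: handshake failed: connection reset by peer")
		}
		return nil
	})
	fmt.Println("final result:", err)
}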
	I0920 17:36:56.558562  245557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:36:56.558883  245557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 17:36:56.674915  245557 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 17:36:56.674949  245557 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 17:36:56.736424  245557 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 17:36:56.736462  245557 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 17:36:56.738918  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 17:36:56.740403  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 17:36:56.779127  245557 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 17:36:56.779166  245557 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 17:36:56.785790  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:36:56.816546  245557 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 17:36:56.816572  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 17:36:56.818607  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 17:36:56.833977  245557 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 17:36:56.834015  245557 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 17:36:56.925219  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 17:36:56.958576  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 17:36:57.000786  245557 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 17:36:57.000810  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 17:36:57.009273  245557 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 17:36:57.009317  245557 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 17:36:57.024743  245557 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 17:36:57.024770  245557 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 17:36:57.043071  245557 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 17:36:57.043099  245557 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 17:36:57.092905  245557 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 17:36:57.092942  245557 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 17:36:57.127201  245557 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 17:36:57.127236  245557 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 17:36:57.158480  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 17:36:57.178499  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 17:36:57.215557  245557 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 17:36:57.215592  245557 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 17:36:57.238838  245557 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 17:36:57.238870  245557 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 17:36:57.247948  245557 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 17:36:57.247973  245557 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 17:36:57.272793  245557 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 17:36:57.272831  245557 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 17:36:57.292572  245557 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 17:36:57.292600  245557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 17:36:57.414471  245557 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 17:36:57.414500  245557 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 17:36:57.441852  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 17:36:57.459384  245557 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 17:36:57.459417  245557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 17:36:57.459605  245557 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 17:36:57.459635  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 17:36:57.487179  245557 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 17:36:57.487211  245557 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 17:36:57.600664  245557 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:36:57.600691  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 17:36:57.665586  245557 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 17:36:57.665618  245557 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 17:36:57.669993  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 17:36:57.692412  245557 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 17:36:57.692454  245557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 17:36:57.777267  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:36:57.878278  245557 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 17:36:57.878309  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 17:36:57.884855  245557 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 17:36:57.884886  245557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 17:36:57.939139  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 17:36:58.166138  245557 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 17:36:58.166167  245557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 17:36:58.648447  245557 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 17:36:58.648486  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 17:36:58.703324  245557 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.144399607s)
	I0920 17:36:58.703358  245557 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.144764949s)
	I0920 17:36:58.703371  245557 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
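The bash pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.39.1 here); it also inserts a log directive ahead of errors. Below is a self-contained Go sketch of only the hosts-stanza part of that string edit; the sample Corefile and helper name are illustrative, not minikube's code.

// Illustrative sketch only: the rewrite performed by the sed pipeline above,
// inserting a hosts{} stanza ahead of the forward plugin so that
// host.minikube.internal resolves to the host gateway IP.
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord returns the Corefile with a hosts block inserted before the
// first "forward . /etc/resolv.conf" line (hypothetical helper).
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out = append(out, hostsBlock)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	// Abbreviated sample Corefile for demonstration; the real one has more plugins.
	corefile := `.:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
}`
	fmt.Println(injectHostRecord(corefile, "192.168.39.1"))
}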
	I0920 17:36:58.704166  245557 node_ready.go:35] waiting up to 6m0s for node "addons-679190" to be "Ready" ...
	I0920 17:36:58.710465  245557 node_ready.go:49] node "addons-679190" has status "Ready":"True"
	I0920 17:36:58.710493  245557 node_ready.go:38] duration metric: took 6.288327ms for node "addons-679190" to be "Ready" ...
	I0920 17:36:58.710503  245557 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:36:58.723116  245557 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dsxdk" in "kube-system" namespace to be "Ready" ...
	I0920 17:36:59.028902  245557 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 17:36:59.028955  245557 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 17:36:59.192793  245557 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 17:36:59.192824  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 17:36:59.212433  245557 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-679190" context rescaled to 1 replicas
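The kapi.go line above reports the coredns deployment being rescaled to a single replica after bring-up. A hedged client-go sketch of such a scale operation follows; the kubeconfig path is taken from the log but used here purely for illustration, and this is not minikube's kapi.go.

// Illustrative sketch only: scaling the kube-system/coredns Deployment to one
// replica through the client-go scale subresource, as the kapi.go line above
// reports.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as seen inside the VM in the log; adjust when running elsewhere.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	deployments := client.AppsV1().Deployments("kube-system")

	// Read the current scale subresource, then write back the desired replica count.
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	scale.Spec.Replicas = 1
	if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}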
	I0920 17:36:59.496357  245557 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 17:36:59.496454  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 17:36:59.712928  245557 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 17:36:59.712961  245557 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 17:37:00.075273  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 17:37:00.759434  245557 pod_ready.go:103] pod "coredns-7c65d6cfc9-dsxdk" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:02.878793  245557 pod_ready.go:103] pod "coredns-7c65d6cfc9-dsxdk" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:03.327281  245557 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 17:37:03.327326  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:37:03.331115  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:37:03.331744  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:37:03.331780  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:37:03.332019  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:37:03.332249  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:37:03.332520  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:37:03.332731  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:37:03.544424  245557 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 17:37:03.611249  245557 addons.go:234] Setting addon gcp-auth=true in "addons-679190"
	I0920 17:37:03.611311  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:37:03.611651  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:37:03.611695  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:37:03.627843  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35373
	I0920 17:37:03.628403  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:37:03.628939  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:37:03.628963  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:37:03.629370  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:37:03.629868  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:37:03.629917  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:37:03.647166  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38535
	I0920 17:37:03.647674  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:37:03.648222  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:37:03.648244  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:37:03.648605  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:37:03.648924  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:37:03.650642  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:37:03.650881  245557 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 17:37:03.650914  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:37:03.653472  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:37:03.653934  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:37:03.653975  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:37:03.654165  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:37:03.654379  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:37:03.654559  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:37:03.654756  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:37:04.200814  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.461859921s)
	I0920 17:37:04.200874  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.200887  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.200907  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.460468965s)
	I0920 17:37:04.200955  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.200972  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201021  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.41520737s)
	I0920 17:37:04.201047  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201055  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201068  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.382435723s)
	I0920 17:37:04.201090  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201101  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201151  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.275909746s)
	I0920 17:37:04.201167  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201168  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.242568548s)
	I0920 17:37:04.201174  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201183  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201191  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201230  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.042717172s)
	I0920 17:37:04.201247  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201255  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201259  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.022724165s)
	I0920 17:37:04.201276  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201286  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201348  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.759469305s)
	I0920 17:37:04.201367  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201375  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201450  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.5314256s)
	I0920 17:37:04.201467  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201476  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201559  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.201567  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.4242534s)
	I0920 17:37:04.201598  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	W0920 17:37:04.201608  245557 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 17:37:04.201637  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.201647  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.201647  245557 retry.go:31] will retry after 372.12607ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
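The stderr above is the usual CRD-ordering failure: the VolumeSnapshotClass is applied in the same batch as the CRD that defines it, before the API server has registered the new kind, so the apply fails and is retried (and later re-applied with --force once the CRDs exist). A hedged sketch of sequencing this explicitly from Go by shelling out to kubectl is shown below; the manifest paths are the ones named in the log, while the timeout and helper are illustrative.

// Illustrative sketch only: apply the snapshot CRDs first, wait for them to be
// Established, then apply the VolumeSnapshotClass and controller that depend
// on them, avoiding the "ensure CRDs are installed first" error above.
package main

import (
	"log"
	"os"
	"os/exec"
)

func kubectl(args ...string) {
	cmd := exec.Command("kubectl", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("kubectl %v: %v", args, err)
	}
}

func main() {
	// 1. Register the CRDs on their own.
	kubectl("apply",
		"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
		"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml")

	// 2. Wait until the API server reports the CRDs as Established.
	kubectl("wait", "--for=condition=Established", "--timeout=60s",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io",
		"crd/volumesnapshotcontents.snapshot.storage.k8s.io",
		"crd/volumesnapshots.snapshot.storage.k8s.io")

	// 3. Now the dependent custom resources and controller can be applied.
	kubectl("apply",
		"-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
		"-f", "/etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml",
		"-f", "/etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml")
}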
	I0920 17:37:04.201655  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201665  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201575  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.201725  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201731  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201734  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.262556719s)
	I0920 17:37:04.201759  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201768  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201842  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.201915  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.201932  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.201952  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.201961  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.201965  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.201970  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.201977  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201983  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.202041  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.202065  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.202070  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.202077  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.202082  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.202642  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.202677  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.202684  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.202691  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.202698  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.204814  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.204838  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.204848  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.204856  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.204860  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.204874  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.204885  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.204897  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.204919  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.204946  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.204956  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.204978  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.205011  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.205017  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.205297  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.205350  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.205649  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.205667  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.205737  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.205747  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.205816  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.205822  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.205830  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.205837  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.205890  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.205929  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.205935  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.205943  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.205949  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.207122  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.207136  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.207181  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.207197  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.207230  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.207298  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.207306  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.207316  245557 addons.go:475] Verifying addon metrics-server=true in "addons-679190"
	I0920 17:37:04.207442  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.207454  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.207468  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.207476  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.207476  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.207484  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.207485  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.207492  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.207513  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.207530  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.207619  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.207638  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.207746  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.207768  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.207775  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.207784  245557 addons.go:475] Verifying addon registry=true in "addons-679190"
	I0920 17:37:04.209273  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.209289  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.209299  245557 addons.go:475] Verifying addon ingress=true in "addons-679190"
	I0920 17:37:04.210060  245557 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-679190 service yakd-dashboard -n yakd-dashboard
	
	I0920 17:37:04.210094  245557 out.go:177] * Verifying registry addon...
	I0920 17:37:04.211026  245557 out.go:177] * Verifying ingress addon...
	I0920 17:37:04.214177  245557 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 17:37:04.214180  245557 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 17:37:04.219984  245557 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 17:37:04.220012  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:04.232040  245557 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 17:37:04.232063  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
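The kapi.go lines above poll pods matching a label selector until they leave Pending and report Ready, which is the same condition the pod_ready.go waits check for the control-plane pods. The sketch below shows that check with client-go; the selector comes from the log, while the kubeconfig path, namespace choice, and polling loop are illustrative assumptions rather than minikube's actual kapi.go.

// Illustrative sketch only: list the registry addon's pods by label selector
// and report how many have the PodReady condition set to True, polling until
// all of them are ready.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether a pod's Ready condition is True.
func isReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; the test drives this through its own context instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	selector := "kubernetes.io/minikube-addons=registry"
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			log.Fatal(err)
		}
		ready := 0
		for _, p := range pods.Items {
			if isReady(p) {
				ready++
			}
		}
		fmt.Printf("%d/%d pods ready for %q\n", ready, len(pods.Items), selector)
		if len(pods.Items) > 0 && ready == len(pods.Items) {
			return
		}
		time.Sleep(2 * time.Second)
	}
}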
	I0920 17:37:04.250786  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.250821  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.251111  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.251130  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	W0920 17:37:04.251227  245557 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0920 17:37:04.260835  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.260869  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.261164  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.261183  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.574205  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:37:04.725222  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:04.729585  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:05.249467  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:05.249466  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:05.283804  245557 pod_ready.go:103] pod "coredns-7c65d6cfc9-dsxdk" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:05.471390  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.396046848s)
	I0920 17:37:05.471473  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:05.471495  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:05.471416  245557 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.820510198s)
	I0920 17:37:05.471936  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:05.471953  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:05.471964  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:05.471971  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:05.472409  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:05.472432  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:05.472435  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:05.472454  245557 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-679190"
	I0920 17:37:05.473667  245557 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:37:05.474639  245557 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 17:37:05.476343  245557 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 17:37:05.477417  245557 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 17:37:05.477751  245557 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 17:37:05.477771  245557 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 17:37:05.501716  245557 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 17:37:05.501756  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
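The repeated kapi.go:96 lines above are minikube polling the cluster for pods that match a label selector (registry, ingress-nginx, csi-hostpath-driver) until every match reaches Running. A minimal client-go sketch of that kind of poll loop follows; it is not minikube's own kapi.go code, and the kubeconfig path, helper name, timeout, and 500ms interval are illustrative assumptions.

```go
// Minimal sketch (not minikube's kapi.go): poll a namespace for pods matching a
// label selector until every matching pod reports phase Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				ready = false
				fmt.Printf("waiting for pod %q, current state: %s\n", p.Name, p.Status.Phase)
			}
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // polling interval is an illustrative choice
		}
	}
}

func main() {
	// Kubeconfig path and timeout are illustrative; the selector matches the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForLabel(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver"); err != nil {
		panic(err)
	}
}
```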
	I0920 17:37:05.627006  245557 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 17:37:05.627047  245557 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 17:37:05.723361  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:05.729929  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:05.777291  245557 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 17:37:05.777327  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 17:37:05.793054  245557 pod_ready.go:93] pod "coredns-7c65d6cfc9-dsxdk" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:05.793083  245557 pod_ready.go:82] duration metric: took 7.069936594s for pod "coredns-7c65d6cfc9-dsxdk" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.793096  245557 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jln6k" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.808093  245557 pod_ready.go:93] pod "coredns-7c65d6cfc9-jln6k" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:05.808122  245557 pod_ready.go:82] duration metric: took 15.016714ms for pod "coredns-7c65d6cfc9-jln6k" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.808135  245557 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.815411  245557 pod_ready.go:93] pod "etcd-addons-679190" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:05.815439  245557 pod_ready.go:82] duration metric: took 7.295923ms for pod "etcd-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.815451  245557 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.826707  245557 pod_ready.go:93] pod "kube-apiserver-addons-679190" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:05.826733  245557 pod_ready.go:82] duration metric: took 11.271544ms for pod "kube-apiserver-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.826746  245557 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.832864  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 17:37:05.843767  245557 pod_ready.go:93] pod "kube-controller-manager-addons-679190" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:05.843804  245557 pod_ready.go:82] duration metric: took 17.048824ms for pod "kube-controller-manager-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.843818  245557 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-klvxz" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.983081  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:06.137818  245557 pod_ready.go:93] pod "kube-proxy-klvxz" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:06.137858  245557 pod_ready.go:82] duration metric: took 294.032966ms for pod "kube-proxy-klvxz" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:06.137870  245557 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:06.226275  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:06.226546  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:06.672283  245557 pod_ready.go:93] pod "kube-scheduler-addons-679190" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:06.672311  245557 pod_ready.go:82] duration metric: took 534.434193ms for pod "kube-scheduler-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:06.672322  245557 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:06.676924  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:06.723323  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:06.723483  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:06.996072  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.421807501s)
	I0920 17:37:06.996136  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:06.996154  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:06.996393  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:06.996417  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:06.996426  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:06.996434  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:06.996451  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:06.996683  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:06.996693  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:06.996709  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:07.016129  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:07.083780  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.250840857s)
	I0920 17:37:07.083874  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:07.083897  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:07.084188  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:07.084212  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:07.084223  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:07.084231  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:07.084473  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:07.084497  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:07.084529  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:07.086428  245557 addons.go:475] Verifying addon gcp-auth=true in "addons-679190"
	I0920 17:37:07.089016  245557 out.go:177] * Verifying gcp-auth addon...
	I0920 17:37:07.091518  245557 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 17:37:07.134782  245557 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 17:37:07.134813  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:07.235817  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:07.236611  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:07.488734  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:07.595170  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:07.721622  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:07.723013  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:07.986730  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:08.097723  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:08.235499  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:08.236709  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:08.484933  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:08.595349  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:08.679620  245557 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:08.720021  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:08.720047  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:08.981981  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:09.095366  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:09.219800  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:09.220283  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:09.482502  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:09.596169  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:09.718911  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:09.719167  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:09.981992  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:10.095430  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:10.218531  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:10.218998  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:10.482432  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:10.597590  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:10.965954  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:10.966262  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:11.066406  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:11.095561  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:11.179050  245557 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:11.219287  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:11.219338  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:11.482288  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:11.595743  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:11.718737  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:11.720102  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:11.983121  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:12.096110  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:12.218665  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:12.219012  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:12.481993  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:12.595323  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:12.719728  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:12.719803  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:12.983136  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:13.095240  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:13.219271  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:13.220586  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:13.482367  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:13.594980  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:13.679290  245557 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:13.719607  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:13.719858  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:13.982406  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:14.096116  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:14.218143  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:14.218348  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:14.481878  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:14.595473  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:14.718781  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:14.719599  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:14.983016  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:15.098359  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:15.218446  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:15.219530  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:15.482914  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:15.596240  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:15.680026  245557 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:15.718767  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:15.719239  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:15.982692  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:16.097226  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:16.218654  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:16.219309  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:16.482137  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:16.595706  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:16.719296  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:16.719734  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:16.981704  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:17.095888  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:17.218140  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:17.219734  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:17.481807  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:17.596056  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:17.720484  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:17.720855  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:17.982237  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:18.095526  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:18.179237  245557 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:18.219858  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:18.220498  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:18.482532  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:18.595184  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:18.719021  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:18.719806  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:18.982493  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:19.096098  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:19.218547  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:19.219496  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:19.482320  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:19.595193  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:19.719166  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:19.720319  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:19.982325  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:20.367324  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:20.367356  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:20.367706  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:20.370491  245557 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:20.482782  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:20.595136  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:20.718920  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:20.719192  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:20.981947  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:21.095534  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:21.218869  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:21.219583  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:21.482550  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:21.595874  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:21.719021  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:21.719268  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:21.982430  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:22.095030  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:22.219384  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:22.219891  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:22.482224  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:22.595405  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:22.679943  245557 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:22.718958  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:22.719141  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:22.982733  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:23.096227  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:23.219788  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:23.220067  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:23.483912  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:23.595388  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:23.718637  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:23.719016  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:23.982130  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:24.095662  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:24.218721  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:24.219057  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:24.482535  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:24.595793  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:24.678781  245557 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:24.678812  245557 pod_ready.go:82] duration metric: took 18.006481882s for pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:24.678822  245557 pod_ready.go:39] duration metric: took 25.968303705s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:37:24.678872  245557 api_server.go:52] waiting for apiserver process to appear ...
	I0920 17:37:24.678948  245557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:37:24.702218  245557 api_server.go:72] duration metric: took 28.524587153s to wait for apiserver process to appear ...
	I0920 17:37:24.702254  245557 api_server.go:88] waiting for apiserver healthz status ...
	I0920 17:37:24.702293  245557 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0920 17:37:24.706595  245557 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0920 17:37:24.707660  245557 api_server.go:141] control plane version: v1.31.1
	I0920 17:37:24.707685  245557 api_server.go:131] duration metric: took 5.422585ms to wait for apiserver health ...
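The api_server.go lines above probe https://192.168.39.158:8443/healthz and treat an HTTP 200 with the literal body "ok" as healthy. A short sketch of the same check through client-go's REST client, with the kubeconfig path as an illustrative assumption:

```go
// Sketch: GET the apiserver /healthz endpoint with the kubeconfig's credentials
// and print the raw body ("ok" when the control plane is healthy).
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// AbsPath bypasses group/version routing, so the request goes straight to /healthz.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)
}
```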
	I0920 17:37:24.707694  245557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 17:37:24.715504  245557 system_pods.go:59] 17 kube-system pods found
	I0920 17:37:24.715541  245557 system_pods.go:61] "coredns-7c65d6cfc9-dsxdk" [3371b6ad-8f6e-4474-a677-f07c0b4e0a38] Running
	I0920 17:37:24.715552  245557 system_pods.go:61] "csi-hostpath-attacher-0" [1630eb87-6fea-4510-8b0d-cb108c179963] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 17:37:24.715563  245557 system_pods.go:61] "csi-hostpath-resizer-0" [9b98474b-c72c-4230-973c-a76ed4f731c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 17:37:24.715573  245557 system_pods.go:61] "csi-hostpathplugin-9m9gc" [00f39caf-3478-4abb-922e-28239885d7bf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 17:37:24.715580  245557 system_pods.go:61] "etcd-addons-679190" [4d3ed97e-c5a9-4017-86dd-68689e55e1f0] Running
	I0920 17:37:24.715586  245557 system_pods.go:61] "kube-apiserver-addons-679190" [24fac84a-44d2-4e96-8680-606874e6b5bb] Running
	I0920 17:37:24.715591  245557 system_pods.go:61] "kube-controller-manager-addons-679190" [5dec7f61-8787-4c49-8f6f-998e2dbc01cb] Running
	I0920 17:37:24.715597  245557 system_pods.go:61] "kube-ingress-dns-minikube" [1a3b7852-a919-4f95-9e5c-20ead0de76ad] Running
	I0920 17:37:24.715603  245557 system_pods.go:61] "kube-proxy-klvxz" [6edcd5de-35eb-4e5b-8073-e2a49428b300] Running
	I0920 17:37:24.715609  245557 system_pods.go:61] "kube-scheduler-addons-679190" [143ec669-777c-495c-80f3-a792643b75e8] Running
	I0920 17:37:24.715619  245557 system_pods.go:61] "metrics-server-84c5f94fbc-fj4mf" [adb63308-9d43-444e-b31b-a5efeef5d323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 17:37:24.715625  245557 system_pods.go:61] "nvidia-device-plugin-daemonset-b5wj9" [eb9faaf1-05e4-4f88-abbb-479f222d2664] Running
	I0920 17:37:24.715637  245557 system_pods.go:61] "registry-66c9cd494c-7g6lm" [4ad8ab0b-f43b-475a-984c-11d2a23963c0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 17:37:24.715646  245557 system_pods.go:61] "registry-proxy-k96rm" [0612b678-15da-44d6-acfb-c29dd8dd2b7d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 17:37:24.715678  245557 system_pods.go:61] "snapshot-controller-56fcc65765-5qmkt" [a4b880c1-bd61-45bb-80cf-ae1dd6af7d4e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:37:24.715689  245557 system_pods.go:61] "snapshot-controller-56fcc65765-cwbl2" [6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:37:24.715696  245557 system_pods.go:61] "storage-provisioner" [339440d6-4355-4e26-a436-2edefb4d7b9d] Running
	I0920 17:37:24.715707  245557 system_pods.go:74] duration metric: took 8.003633ms to wait for pod list to return data ...
	I0920 17:37:24.715719  245557 default_sa.go:34] waiting for default service account to be created ...
	I0920 17:37:24.719187  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:24.719879  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:24.721134  245557 default_sa.go:45] found service account: "default"
	I0920 17:37:24.721158  245557 default_sa.go:55] duration metric: took 5.428135ms for default service account to be created ...
	I0920 17:37:24.721168  245557 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 17:37:24.730946  245557 system_pods.go:86] 17 kube-system pods found
	I0920 17:37:24.730977  245557 system_pods.go:89] "coredns-7c65d6cfc9-dsxdk" [3371b6ad-8f6e-4474-a677-f07c0b4e0a38] Running
	I0920 17:37:24.730988  245557 system_pods.go:89] "csi-hostpath-attacher-0" [1630eb87-6fea-4510-8b0d-cb108c179963] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 17:37:24.730995  245557 system_pods.go:89] "csi-hostpath-resizer-0" [9b98474b-c72c-4230-973c-a76ed4f731c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 17:37:24.731005  245557 system_pods.go:89] "csi-hostpathplugin-9m9gc" [00f39caf-3478-4abb-922e-28239885d7bf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 17:37:24.731009  245557 system_pods.go:89] "etcd-addons-679190" [4d3ed97e-c5a9-4017-86dd-68689e55e1f0] Running
	I0920 17:37:24.731014  245557 system_pods.go:89] "kube-apiserver-addons-679190" [24fac84a-44d2-4e96-8680-606874e6b5bb] Running
	I0920 17:37:24.731017  245557 system_pods.go:89] "kube-controller-manager-addons-679190" [5dec7f61-8787-4c49-8f6f-998e2dbc01cb] Running
	I0920 17:37:24.731021  245557 system_pods.go:89] "kube-ingress-dns-minikube" [1a3b7852-a919-4f95-9e5c-20ead0de76ad] Running
	I0920 17:37:24.731024  245557 system_pods.go:89] "kube-proxy-klvxz" [6edcd5de-35eb-4e5b-8073-e2a49428b300] Running
	I0920 17:37:24.731027  245557 system_pods.go:89] "kube-scheduler-addons-679190" [143ec669-777c-495c-80f3-a792643b75e8] Running
	I0920 17:37:24.731031  245557 system_pods.go:89] "metrics-server-84c5f94fbc-fj4mf" [adb63308-9d43-444e-b31b-a5efeef5d323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 17:37:24.731036  245557 system_pods.go:89] "nvidia-device-plugin-daemonset-b5wj9" [eb9faaf1-05e4-4f88-abbb-479f222d2664] Running
	I0920 17:37:24.731041  245557 system_pods.go:89] "registry-66c9cd494c-7g6lm" [4ad8ab0b-f43b-475a-984c-11d2a23963c0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 17:37:24.731047  245557 system_pods.go:89] "registry-proxy-k96rm" [0612b678-15da-44d6-acfb-c29dd8dd2b7d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 17:37:24.731053  245557 system_pods.go:89] "snapshot-controller-56fcc65765-5qmkt" [a4b880c1-bd61-45bb-80cf-ae1dd6af7d4e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:37:24.731061  245557 system_pods.go:89] "snapshot-controller-56fcc65765-cwbl2" [6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:37:24.731065  245557 system_pods.go:89] "storage-provisioner" [339440d6-4355-4e26-a436-2edefb4d7b9d] Running
	I0920 17:37:24.731073  245557 system_pods.go:126] duration metric: took 9.894741ms to wait for k8s-apps to be running ...
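The system_pods.go entries above summarize each kube-system pod's phase and, for Pending pods, which containers are not yet ready. A small sketch that reproduces that breakdown with client-go; again the kubeconfig path is an illustrative assumption, not part of the test itself.

```go
// Sketch: list kube-system pods and, for anything not fully ready, report which
// containers are still unready — the same breakdown printed in the log above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		var unready []string
		for _, s := range p.Status.ContainerStatuses {
			if !s.Ready {
				unready = append(unready, s.Name)
			}
		}
		if len(unready) == 0 {
			fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
			continue
		}
		fmt.Printf("%q %s (containers with unready status: %v)\n", p.Name, p.Status.Phase, unready)
	}
}
```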
	I0920 17:37:24.731083  245557 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 17:37:24.731128  245557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:37:24.756246  245557 system_svc.go:56] duration metric: took 25.149435ms for WaitForService to wait for kubelet
	I0920 17:37:24.756281  245557 kubeadm.go:582] duration metric: took 28.578660436s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:37:24.756309  245557 node_conditions.go:102] verifying NodePressure condition ...
	I0920 17:37:24.759977  245557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 17:37:24.760008  245557 node_conditions.go:123] node cpu capacity is 2
	I0920 17:37:24.760024  245557 node_conditions.go:105] duration metric: took 3.709037ms to run NodePressure ...
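The node_conditions.go lines read the node's capacity (ephemeral storage, CPU) and confirm no pressure conditions are set. A sketch of the same read against the Node object, with the kubeconfig path as an illustrative assumption:

```go
// Sketch: print ephemeral-storage and CPU capacity plus the pressure conditions
// for each node — the values the NodePressure step above reports.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure / DiskPressure / PIDPressure should all be False on a healthy node.
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
```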
	I0920 17:37:24.760039  245557 start.go:241] waiting for startup goroutines ...
	I0920 17:37:24.982102  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:25.692769  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:25.692898  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:25.693075  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:25.695274  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:25.792088  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:25.792245  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:25.792632  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:25.985189  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:26.096256  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:26.218968  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:26.219398  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:26.483148  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:26.596907  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:26.720942  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:26.723460  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:26.984520  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:27.096894  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:27.220406  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:27.220769  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:27.484078  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:27.595695  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:27.720606  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:27.721639  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:27.982903  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:28.095938  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:28.219987  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:28.220971  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:28.481389  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:28.610426  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:28.719978  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:28.720142  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:28.983078  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:29.095989  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:29.218709  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:29.218930  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:29.482101  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:29.595615  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:29.719088  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:29.719199  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:29.982791  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:30.095599  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:30.217922  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:30.218986  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:30.482436  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:30.595942  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:30.718190  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:30.719931  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:30.981672  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:31.095251  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:31.219963  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:31.221121  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:31.482257  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:31.595251  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:31.720133  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:31.720333  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:31.982198  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:32.096412  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:32.219862  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:32.219953  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:32.482618  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:32.594901  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:32.719007  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:32.719260  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:32.982391  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:33.401207  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:33.401585  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:33.401765  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:33.483109  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:33.596749  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:33.720837  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:33.721022  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:33.982168  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:34.096308  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:34.218951  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:34.219384  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:34.482769  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:34.598370  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:34.720347  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:34.720439  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:34.982322  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:35.095917  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:35.219540  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:35.219941  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:35.487140  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:35.594855  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:35.718904  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:35.720766  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:35.982409  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:36.095826  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:36.219811  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:36.220538  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:36.482003  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:36.594821  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:36.719933  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:36.720068  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:36.981755  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:37.095191  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:37.219188  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:37.219358  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:37.659063  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:37.661446  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:37.720179  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:37.721456  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:37.982458  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:38.094851  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:38.218310  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:38.220085  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:38.483023  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:38.594895  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:38.722426  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:38.725839  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:38.982505  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:39.098447  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:39.218837  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:39.218838  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:39.481811  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:39.595792  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:39.718814  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:39.719320  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:39.982909  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:40.095985  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:40.218489  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:40.219278  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:40.481967  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:40.595737  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:40.718910  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:40.719238  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:40.983283  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:41.095940  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:41.219565  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:41.221013  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:41.482435  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:41.595334  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:41.720647  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:41.720684  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:41.983153  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:42.094768  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:42.220086  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:42.220394  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:42.482518  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:42.595381  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:42.720206  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:42.720419  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:42.983132  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:43.095176  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:43.219302  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:43.219642  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:43.482642  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:43.595409  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:43.721373  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:43.721632  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:43.982605  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:44.098347  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:44.219089  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:44.221006  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:44.482252  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:44.595145  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:44.719105  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:44.719297  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:44.982659  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:45.095138  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:45.219384  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:45.220254  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:45.482221  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:45.595501  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:45.718470  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:45.719300  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:45.982785  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:46.095155  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:46.219224  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:46.219462  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:46.483020  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:46.595174  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:46.719798  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:46.720503  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:46.983110  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:47.095113  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:47.219161  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:47.219467  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:47.482526  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:47.596127  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:47.719708  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:47.722386  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:47.983136  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:48.095553  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:48.219791  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:48.220277  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:48.482138  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:48.595891  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:48.719594  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:48.719903  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:48.983736  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:49.095768  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:49.218264  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:49.218364  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:49.482328  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:49.594924  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:49.720709  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:49.721038  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:49.984147  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:50.095908  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:50.218641  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:50.219557  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:50.482216  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:50.595636  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:50.718073  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:50.718453  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:50.982477  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:51.096470  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:51.218737  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:51.219071  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:51.482552  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:51.594846  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:51.719591  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:51.719929  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:51.982403  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:52.094835  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:52.219094  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:52.219308  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:52.505584  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:52.595700  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:52.722416  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:52.722934  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:52.982125  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:53.095783  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:53.219934  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:53.220724  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:53.481962  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:53.595411  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:53.719033  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:53.719514  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:53.983809  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:54.104151  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:54.218336  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:54.220411  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:54.483652  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:54.596251  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:54.719528  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:54.720220  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:55.232368  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:55.232724  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:55.233181  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:55.233391  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:55.481929  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:55.595861  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:55.718543  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:55.718985  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:55.983911  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:56.095903  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:56.220860  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:56.221898  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:56.482778  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:56.595842  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:56.718847  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:56.719100  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:56.982103  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:57.095564  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:57.218351  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:57.218561  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:57.482555  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:57.595003  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:57.719076  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:57.719278  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:57.983394  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:58.095258  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:58.218602  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:58.219149  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:58.482754  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:58.595355  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:58.719161  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:58.719321  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:58.981879  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:59.095291  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:59.219381  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:59.220178  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:59.482616  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:59.596295  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:59.719294  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:59.719426  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:59.993620  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:00.096081  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:00.219485  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:38:00.219841  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:00.482663  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:00.595396  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:00.720126  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:38:00.720694  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:00.992204  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:01.097761  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:01.219498  245557 kapi.go:107] duration metric: took 57.005316247s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 17:38:01.220075  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:01.484002  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:01.595128  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:01.719104  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:02.136461  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:02.137748  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:02.239284  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:02.484059  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:02.597924  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:02.718800  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:02.982988  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:03.095774  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:03.226940  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:03.482947  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:03.595687  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:03.718635  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:03.982370  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:04.102128  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:04.219301  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:04.483127  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:04.595106  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:04.719141  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:04.981631  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:05.101287  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:05.219055  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:05.482258  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:05.595242  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:05.718751  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:05.982406  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:06.106689  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:06.221343  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:06.482771  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:06.594811  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:06.719092  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:06.981985  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:07.097061  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:07.219541  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:07.483408  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:07.595174  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:07.719181  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:07.982038  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:08.095412  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:08.220499  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:08.484258  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:08.595950  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:08.718848  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:08.983659  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:09.095507  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:09.223029  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:09.486835  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:09.599413  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:09.719800  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:09.982147  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:10.095511  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:10.669481  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:10.669715  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:10.669778  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:10.753003  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:10.984155  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:11.096392  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:11.226061  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:11.482229  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:11.595481  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:11.719332  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:11.982116  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:12.095541  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:12.221657  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:12.482601  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:12.595013  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:12.731224  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:12.982914  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:13.095203  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:13.220342  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:13.483110  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:13.598709  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:13.718995  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:13.983441  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:14.094805  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:14.225305  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:14.482669  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:14.596239  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:14.720831  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:14.982905  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:15.095677  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:15.218902  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:15.482688  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:15.595271  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:15.752797  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:15.982814  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:16.095989  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:16.218789  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:16.482153  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:16.595532  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:16.718428  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:16.982631  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:17.095161  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:17.218895  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:17.482903  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:17.595571  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:17.720206  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:17.981706  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:18.095526  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:18.221214  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:18.483135  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:18.595497  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:18.723739  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:18.983544  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:19.096959  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:19.218551  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:19.482638  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:19.595067  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:19.718890  245557 kapi.go:107] duration metric: took 1m15.504703683s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 17:38:19.986184  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:20.098494  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:20.482419  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:20.595350  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:20.984285  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:21.095405  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:21.482801  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:21.595705  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:21.982482  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:22.095811  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:22.482263  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:22.595985  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:22.983166  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:23.095955  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:23.482802  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:23.607423  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:23.983139  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:24.095964  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:24.482710  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:24.595867  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:24.982831  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:25.095296  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:25.485376  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:25.602264  245557 kapi.go:107] duration metric: took 1m18.510746029s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 17:38:25.604077  245557 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-679190 cluster.
	I0920 17:38:25.605455  245557 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 17:38:25.607126  245557 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
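	As an illustration of the opt-out described in the messages above, a pod created with the gcp-auth-skip-secret label should not get the credentials mounted, and the addon can be re-applied to existing pods with --refresh. The pod name and label value below are hypothetical; the label key and the --refresh flag come from the log messages above, and the profile/context name assumes the addons-679190 cluster from this run:
	    kubectl --context addons-679190 run skip-gcp-demo --image=busybox --labels="gcp-auth-skip-secret=true" --restart=Never -- sleep 3600
	    minikube -p addons-679190 addons enable gcp-auth --refresh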
	I0920 17:38:25.983952  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:26.489199  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:26.984440  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:27.486001  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:27.982356  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:28.481673  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:28.985677  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:29.483232  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:29.981588  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:30.486914  245557 kapi.go:107] duration metric: took 1m25.009495563s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 17:38:30.489426  245557 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, cloud-spanner, inspektor-gadget, nvidia-device-plugin, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0920 17:38:30.491035  245557 addons.go:510] duration metric: took 1m34.313356496s for enable addons: enabled=[storage-provisioner ingress-dns cloud-spanner inspektor-gadget nvidia-device-plugin metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0920 17:38:30.491107  245557 start.go:246] waiting for cluster config update ...
	I0920 17:38:30.491135  245557 start.go:255] writing updated cluster config ...
	I0920 17:38:30.491469  245557 ssh_runner.go:195] Run: rm -f paused
	I0920 17:38:30.547139  245557 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 17:38:30.549058  245557 out.go:177] * Done! kubectl is now configured to use "addons-679190" cluster and "default" namespace by default
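	A quick way to confirm the wiring reported in the Done message (context name taken from the log; an empty namespace result also means the default namespace is in use):
	    kubectl config current-context                                   # expected: addons-679190
	    kubectl config view --minify --output 'jsonpath={..namespace}'   # expected: default (or empty)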
	
	
	==> CRI-O <==
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.703101563Z" level=info msg="Removing container: 88333eb2bd8520bc6d9f2ec01e1b00cf091952a98ce6b22d632ecbbae8ba9b7f" file="server/container_remove.go:24" id=c2fd0afe-a14b-47e2-8c6b-9d84831ade73 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.714542772Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 4ad8ab0b-f43b-475a-984c-11d2a23963c0,},},}" file="otel-collector/interceptors.go:62" id=7b086096-d591-45e2-8e08-42f53d4d7f46 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.714628858Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b086096-d591-45e2-8e08-42f53d4d7f46 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.714687061Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7b086096-d591-45e2-8e08-42f53d4d7f46 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:47:46 addons-679190 conmon[4523]: conmon c0083fc45bb0bca19818 <ninfo>: container 4535 exited with status 2
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.726516525Z" level=debug msg="Event: WRITE         \"/var/run/crio/exits/c0083fc45bb0bca1981820f7ca4d5a2c77beaea5bfc30f23f17e9287f8b8c14e.NLT4T2\"" file="server/server.go:805"
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.726606077Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/c0083fc45bb0bca1981820f7ca4d5a2c77beaea5bfc30f23f17e9287f8b8c14e.NLT4T2\"" file="server/server.go:805"
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.726663071Z" level=debug msg="Container or sandbox exited: c0083fc45bb0bca1981820f7ca4d5a2c77beaea5bfc30f23f17e9287f8b8c14e.NLT4T2" file="server/server.go:810"
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.726995013Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/c0083fc45bb0bca1981820f7ca4d5a2c77beaea5bfc30f23f17e9287f8b8c14e\"" file="server/server.go:805"
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.729741290Z" level=debug msg="Container or sandbox exited: c0083fc45bb0bca1981820f7ca4d5a2c77beaea5bfc30f23f17e9287f8b8c14e" file="server/server.go:810"
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.729768435Z" level=debug msg="container exited and found: c0083fc45bb0bca1981820f7ca4d5a2c77beaea5bfc30f23f17e9287f8b8c14e" file="server/server.go:825"
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.727002542Z" level=debug msg="Event: RENAME        \"/var/run/crio/exits/c0083fc45bb0bca1981820f7ca4d5a2c77beaea5bfc30f23f17e9287f8b8c14e.NLT4T2\"" file="server/server.go:805"
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.735113911Z" level=debug msg="Unmounted container 88333eb2bd8520bc6d9f2ec01e1b00cf091952a98ce6b22d632ecbbae8ba9b7f" file="storage/runtime.go:495" id=c2fd0afe-a14b-47e2-8c6b-9d84831ade73 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.740603625Z" level=info msg="Removed container 88333eb2bd8520bc6d9f2ec01e1b00cf091952a98ce6b22d632ecbbae8ba9b7f: kube-system/registry-proxy-k96rm/registry-proxy" file="server/container_remove.go:40" id=c2fd0afe-a14b-47e2-8c6b-9d84831ade73 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.740711275Z" level=debug msg="Response: &RemoveContainerResponse{}" file="otel-collector/interceptors.go:74" id=c2fd0afe-a14b-47e2-8c6b-9d84831ade73 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.765826592Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b70d2344-63b1-4786-b12c-27f4604cdc7e name=/runtime.v1.RuntimeService/Version
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.765955605Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b70d2344-63b1-4786-b12c-27f4604cdc7e name=/runtime.v1.RuntimeService/Version
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.767217869Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b6b144f5-6eef-441f-8662-717806726c51 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.768443427Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854466768417087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:550632,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6b144f5-6eef-441f-8662-717806726c51 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.768995393Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3f9bc4e-e38e-4436-a318-0bf098be0ea3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.769053371Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3f9bc4e-e38e-4436-a318-0bf098be0ea3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.769469636Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19b33ea2ca1e95f3cf6352959a33b4097f8b4afb2a99285d44c77b250f277153,PodSandboxId:baa7b5cea9fa13bed223540120ceec73698806159aa49c33bd266d18b3ec5d0b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726854442139284452,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 719ce5c1-7853-4fc9-8fd3-7725aba7ed0c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0b4062645fc2c1c8c82cdce410360489fe6600fb411ea6d60c712a5c12813f,PodSandboxId:5339d36289fab846d220fa9edfa1af3bbc0ffda6cd68845caeceb1aa176d74b5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726853904701573245,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-58447,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6925f58e-54c8-43f8-893e-4ff8a6a84707,},Annotations:map[string]string{io.kubernetes.container.has
h: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eaa551f6ba5a1855d1ed04c948c80937d8b2f668c643d6274f92360ad804f83,PodSandboxId:560439494ad105d229c37a2ac95f258671d2e3b1610c2fa30f4ee5b05bf87f2f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726853898201298020,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-rvfrg,io.kubernetes.pod.namespace: ingress-nginx
,io.kubernetes.pod.uid: 67a3fcb3-ce83-4bf8-a804-16a31e6d5da4,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:3825f7af126e587a79dfe6d3c64f647f4ccb761c8df843dbb144c54906de5bed,PodSandboxId:18386a0016b7ddae0aedd789d358584cdada2d1f8b42771e39fcc4d6cdf1aacf,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726853883039393792,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-44nll,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3ec98204-6d97-4ac3-a7a9-d53c47f3ab50,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23688c0ebf707d13467884ca61445010331f6a8ec609486dddc1e941625565b6,PodSandboxId:697ab83fd20d470579c5d40bd8e40020a2bc863a8ec4285b53adf170408a43d0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726853882399572406,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-85mv4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a62c0948-1d28-462f-9fa5-104e567a74d7,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a47d5ced11867df2542b7b98427596e52c80e9913f8ceab9aa2748b2aed2300,PodSandboxId:1e7aef814dca96cb514f8adb189bb3b15bb2072a02e62607acbfe178eb6bcac4,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2
b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726853880026848792,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-cwbl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0083fc45bb0bca1981820f7ca4d5a2c77beaea5bfc30f23f17e9287f8b8c14e,PodSandboxId:54b5b96a2e7081b03c82b1729fa337b879d2c984e10f25f4583db7aa0b45a18b,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{I
mage:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726853869725963944,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-5qmkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4b880c1-bd61-45bb-80cf-ae1dd6af7d4e,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f37f4284c136c5a93b735b37ed0979ddbd084e8586efb69fd840601eea6e9b2d,PodSandboxId:0e75080753c376351238a32172857b9850af59e0803ed5f26aa7a13beb06c7fe,Metadata:&Contai
nerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726853848562042456,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-vcxc2,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 7fc94b7c-a858-4af7-9355-2a81abf00a96,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33bb418967f323981293619c300cd658dfb1efb2617606f6872ec9443138d8d4,PodSandboxId:d86e657235d1
aee688b3d4777827dc899fbf0d085c23dbc7f847861897fb0987,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726853845803445826,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-fj4mf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb63308-9d43-444e-b31b-a5efeef5d323,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:9bf9d5c0ec974954d96f24bfd70172a55a8168c40434b059b8bb1c9a2f044392,PodSandboxId:e35a21da42d2900dc4ad0b52f7136a873eb67efa752aed09943f6793a097132e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726853832813835694,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a3b7852-a919-4f95-9e5c-20ead0de76ad,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c860a4c507c477109d4747b1b074c68a4a31b9aeee2cfc9591edda6f92a49c41,PodSandboxId:5c51ff7a22efee9bfed73a4683dfae61105461d71e774cd60d35b169d58701f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726853822536970769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339440d6-4355-4e26-a436-2edefb4d7b9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56bab5bc8aac1da4daf358eadce72c458a49fba19d4c18106120004ede4b716,PodSandboxId:6cb2d547f55f043965d896f833e8e278cd9ea490c81edd68102d7c2c5eb333bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726853820362351318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dsxdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371b6ad-8f6e-4474-a677-f07c0b4e0a38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92cf3212ca3856ac30692de35be4bf7391dbf53d3b71d366bbd05e33353b54b5,PodSandboxId:28727162eae183907c195d9fd5223acf57032a1040947cea5dfa9b15cfe6dd47,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726853817570982707,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klvxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 6edcd5de-35eb-4e5b-8073-e2a49428b300,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f013b45bfd968d0cf23514647a630ce699e0ed7b9a36138115cce03563ebd0ef,PodSandboxId:fa1a7012d3ccb9a78a9cc3d8d35ee4a4aa883415cdd8fe1eda6bd57d5483df19,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726853806327216995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a8f619038c0e1f5f5e421f1961f8a4,},Annot
ations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48780df679f85764217ee650d3268dfc7988e43dd065577ee1d4a41b3b94f2c,PodSandboxId:fd2c93f89c728325fe986e081ce5e22caf7056060693eac9660634d886e81823,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726853806316679007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc8d6b591a917dcaa84c49b09e7c78a,},Annotations:map[string
]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b6b3339abef5b932a0a51290bacbf5ea276d9c2f651978f0fb128032a963ff0,PodSandboxId:fd6f30369eff5dcb42ee83f7ad74d1d0fa801c373f8920a44d3e06edba2e06d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726853806319373192,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97255a7c0e075db6f5e083c1ea277628,},Annotations:
map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad81a4e34c0219e34fccc748356a548ec1f96c3939761411d1393005f3368bd,PodSandboxId:7b642a7bdd5a2c7525004ed61914d2fc58ca1f4c403f001bb0d898ef73e618c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726853806168616121,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548b4fadf5e5a756eea840e162d03eb0,},Annotations:map[string]string
{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3f9bc4e-e38e-4436-a318-0bf098be0ea3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.785754135Z" level=debug msg="Unmounted container 6a47d5ced11867df2542b7b98427596e52c80e9913f8ceab9aa2748b2aed2300" file="storage/runtime.go:495" id=a6685c5d-0548-4835-8a51-c6cafb09891b name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.798460497Z" level=debug msg="Found exit code for 6a47d5ced11867df2542b7b98427596e52c80e9913f8ceab9aa2748b2aed2300: 2" file="oci/runtime_oci.go:1022"
	Sep 20 17:47:46 addons-679190 crio[661]: time="2024-09-20 17:47:46.798625434Z" level=debug msg="Skipping status update for: &{State:{Version:1.0.2-dev ID:6a47d5ced11867df2542b7b98427596e52c80e9913f8ceab9aa2748b2aed2300 Status:stopped Pid:0 Bundle:/run/containers/storage/overlay-containers/6a47d5ced11867df2542b7b98427596e52c80e9913f8ceab9aa2748b2aed2300/userdata Annotations:map[io.container.manager:cri-o io.kubernetes.container.hash:b7d21815 io.kubernetes.container.name:volume-snapshot-controller io.kubernetes.container.restartCount:0 io.kubernetes.container.terminationMessagePath:/dev/termination-log io.kubernetes.container.terminationMessagePolicy:File io.kubernetes.cri-o.Annotations:{\"io.kubernetes.container.hash\":\"b7d21815\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"} io.kubernetes.cri-o.ContainerID:6a47
d5ced11867df2542b7b98427596e52c80e9913f8ceab9aa2748b2aed2300 io.kubernetes.cri-o.ContainerType:container io.kubernetes.cri-o.Created:2024-09-20T17:38:00.027993682Z io.kubernetes.cri-o.IP.0:10.244.0.12 io.kubernetes.cri-o.Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922 io.kubernetes.cri-o.ImageName:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 io.kubernetes.cri-o.ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c io.kubernetes.cri-o.Labels:{\"io.kubernetes.container.name\":\"volume-snapshot-controller\",\"io.kubernetes.pod.name\":\"snapshot-controller-56fcc65765-cwbl2\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f\"} io.kubernetes.cri-o.LogPath:/var/log/pods/kube-system_snapshot-controller-56fcc65765-cwbl2_6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f/volume-snapshot-controller/0.
log io.kubernetes.cri-o.Metadata:{\"name\":\"volume-snapshot-controller\"} io.kubernetes.cri-o.MountPoint:/var/lib/containers/storage/overlay/b67af9eaa209b47c0808f69def90e5481b2a45b97146bd7d0c9826b470bf96b4/merged io.kubernetes.cri-o.Name:k8s_volume-snapshot-controller_snapshot-controller-56fcc65765-cwbl2_kube-system_6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f_0 io.kubernetes.cri-o.PlatformRuntimePath: io.kubernetes.cri-o.ResolvPath:/var/run/containers/storage/overlay-containers/1e7aef814dca96cb514f8adb189bb3b15bb2072a02e62607acbfe178eb6bcac4/userdata/resolv.conf io.kubernetes.cri-o.SandboxID:1e7aef814dca96cb514f8adb189bb3b15bb2072a02e62607acbfe178eb6bcac4 io.kubernetes.cri-o.SandboxName:k8s_snapshot-controller-56fcc65765-cwbl2_kube-system_6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f_0 io.kubernetes.cri-o.SeccompProfilePath:Unconfined io.kubernetes.cri-o.Stdin:false io.kubernetes.cri-o.StdinOnce:false io.kubernetes.cri-o.TTY:false io.kubernetes.cri-o.Volumes:[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubel
et/pods/6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f/containers/volume-snapshot-controller/b4e0eb26\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f/volumes/kubernetes.io~projected/kube-api-access-2vv5x\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}] io.kubernetes.pod.name:snapshot-controller-56fcc65765-cwbl2 io.kubernetes.pod.namespace:kube-system io.kubernetes.pod.terminationGracePeriod:30 io.kubernetes.pod.uid:6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f kubernetes.io/config.seen:2024-09-20T17:37:03.435481746Z kubernetes.io/config.source:api]} Created:2024-09-20 17:38:00.086751214 +0000 UTC Started:2024-09-20 17:38:00.115414596 +0000 UTC m=+83.145715230
Finished:2024-09-20 17:47:46.675998672 +0000 UTC ExitCode:0xc00118d2e8 OOMKilled:false SeccompKilled:false Error: InitPid:4777 InitStartTime:10142 CheckpointedAt:0001-01-01 00:00:00 +0000 UTC}" file="oci/runtime_oci.go:946" id=a6685c5d-0548-4835-8a51-c6cafb09891b name=/runtime.v1.RuntimeService/StopContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                         ATTEMPT             POD ID              POD
	19b33ea2ca1e9       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              24 seconds ago      Running             nginx                        0                   baa7b5cea9fa1       nginx
	ec0b4062645fc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 9 minutes ago       Running             gcp-auth                     0                   5339d36289fab       gcp-auth-89d5ffd79-58447
	2eaa551f6ba5a       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             9 minutes ago       Running             controller                   0                   560439494ad10       ingress-nginx-controller-bc57996ff-rvfrg
	3825f7af126e5       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             9 minutes ago       Exited              patch                        1                   18386a0016b7d       ingress-nginx-admission-patch-44nll
	23688c0ebf707       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              create                       0                   697ab83fd20d4       ingress-nginx-admission-create-85mv4
	6a47d5ced1186       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922      9 minutes ago       Exited              volume-snapshot-controller   0                   1e7aef814dca9       snapshot-controller-56fcc65765-cwbl2
	c0083fc45bb0b       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922      9 minutes ago       Exited              volume-snapshot-controller   0                   54b5b96a2e708       snapshot-controller-56fcc65765-5qmkt
	f37f4284c136c       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             10 minutes ago      Running             local-path-provisioner       0                   0e75080753c37       local-path-provisioner-86d989889c-vcxc2
	33bb418967f32       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        10 minutes ago      Running             metrics-server               0                   d86e657235d1a       metrics-server-84c5f94fbc-fj4mf
	9bf9d5c0ec974       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             10 minutes ago      Running             minikube-ingress-dns         0                   e35a21da42d29       kube-ingress-dns-minikube
	c860a4c507c47       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner          0                   5c51ff7a22efe       storage-provisioner
	e56bab5bc8aac       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             10 minutes ago      Running             coredns                      0                   6cb2d547f55f0       coredns-7c65d6cfc9-dsxdk
	92cf3212ca385       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             10 minutes ago      Running             kube-proxy                   0                   28727162eae18       kube-proxy-klvxz
	f013b45bfd968       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             11 minutes ago      Running             etcd                         0                   fa1a7012d3ccb       etcd-addons-679190
	6b6b3339abef5       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             11 minutes ago      Running             kube-controller-manager      0                   fd6f30369eff5       kube-controller-manager-addons-679190
	b48780df679f8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             11 minutes ago      Running             kube-scheduler               0                   fd2c93f89c728       kube-scheduler-addons-679190
	1ad81a4e34c02       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             11 minutes ago      Running             kube-apiserver               0                   7b642a7bdd5a2       kube-apiserver-addons-679190
	
	
	==> coredns [e56bab5bc8aac1da4daf358eadce72c458a49fba19d4c18106120004ede4b716] <==
	[INFO] 10.244.0.6:43873 - 60318 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000102028s
	[INFO] 10.244.0.6:49634 - 1642 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000165365s
	[INFO] 10.244.0.6:49634 - 14697 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000162336s
	[INFO] 10.244.0.6:51034 - 55160 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094365s
	[INFO] 10.244.0.6:51034 - 10106 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000065072s
	[INFO] 10.244.0.6:60315 - 40487 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043683s
	[INFO] 10.244.0.6:60315 - 43321 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084501s
	[INFO] 10.244.0.6:35891 - 53873 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000042787s
	[INFO] 10.244.0.6:35891 - 34419 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000033205s
	[INFO] 10.244.0.6:33447 - 25628 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000150214s
	[INFO] 10.244.0.6:33447 - 35042 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000039143s
	[INFO] 10.244.0.6:40512 - 59984 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000083424s
	[INFO] 10.244.0.6:40512 - 63570 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038001s
	[INFO] 10.244.0.6:45833 - 63289 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052986s
	[INFO] 10.244.0.6:45833 - 1087 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027214s
	[INFO] 10.244.0.6:44945 - 60461 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000039073s
	[INFO] 10.244.0.6:44945 - 33323 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000036767s
	[INFO] 10.244.0.21:44980 - 55236 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000537238s
	[INFO] 10.244.0.21:57739 - 29484 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00030418s
	[INFO] 10.244.0.21:36936 - 49312 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000192738s
	[INFO] 10.244.0.21:55426 - 11322 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000253353s
	[INFO] 10.244.0.21:57850 - 37730 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000183379s
	[INFO] 10.244.0.21:53881 - 17609 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00016102s
	[INFO] 10.244.0.21:43516 - 53661 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001411255s
	[INFO] 10.244.0.21:49825 - 60779 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000963325s
	
	
	==> describe nodes <==
	Name:               addons-679190
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-679190
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=addons-679190
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T17_36_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-679190
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:36:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-679190
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:47:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:47:23 +0000   Fri, 20 Sep 2024 17:36:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:47:23 +0000   Fri, 20 Sep 2024 17:36:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:47:23 +0000   Fri, 20 Sep 2024 17:36:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:47:23 +0000   Fri, 20 Sep 2024 17:36:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.158
	  Hostname:    addons-679190
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 be44d0789ac247d0942761612c630a1f
	  System UUID:                be44d078-9ac2-47d0-9427-61612c630a1f
	  Boot ID:                    b2360fab-23fa-467c-99ca-2729b31c70c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  gcp-auth                    gcp-auth-89d5ffd79-58447                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-rvfrg    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-dsxdk                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 etcd-addons-679190                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-679190                250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-679190       200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-klvxz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-679190                100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-84c5f94fbc-fj4mf             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  local-path-storage          local-path-provisioner-86d989889c-vcxc2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 10m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m   kubelet          Node addons-679190 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m   kubelet          Node addons-679190 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m   kubelet          Node addons-679190 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m   kubelet          Node addons-679190 status is now: NodeReady
	  Normal  RegisteredNode           10m   node-controller  Node addons-679190 event: Registered Node addons-679190 in Controller
	
	
	==> dmesg <==
	[  +5.033044] kauditd_printk_skb: 109 callbacks suppressed
	[  +5.584219] kauditd_printk_skb: 65 callbacks suppressed
	[ +15.750642] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.216234] kauditd_printk_skb: 15 callbacks suppressed
	[  +9.065331] kauditd_printk_skb: 9 callbacks suppressed
	[  +8.958265] kauditd_printk_skb: 4 callbacks suppressed
	[Sep20 17:38] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.888314] kauditd_printk_skb: 42 callbacks suppressed
	[ +10.053683] kauditd_printk_skb: 77 callbacks suppressed
	[  +5.312437] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.622571] kauditd_printk_skb: 54 callbacks suppressed
	[ +26.074732] kauditd_printk_skb: 13 callbacks suppressed
	[Sep20 17:39] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 17:41] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 17:43] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 17:46] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.504877] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.087829] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.196655] kauditd_printk_skb: 59 callbacks suppressed
	[  +7.642956] kauditd_printk_skb: 1 callbacks suppressed
	[Sep20 17:47] kauditd_printk_skb: 14 callbacks suppressed
	[ +12.859221] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.845694] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.862171] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.689570] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [f013b45bfd968d0cf23514647a630ce699e0ed7b9a36138115cce03563ebd0ef] <==
	{"level":"warn","ts":"2024-09-20T17:38:10.652277Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"446.317258ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:38:10.652308Z","caller":"traceutil/trace.go:171","msg":"trace[1537739953] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1040; }","duration":"446.348193ms","start":"2024-09-20T17:38:10.205954Z","end":"2024-09-20T17:38:10.652302Z","steps":["trace[1537739953] 'agreement among raft nodes before linearized reading'  (duration: 446.305109ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:38:10.652325Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T17:38:10.205878Z","time spent":"446.442946ms","remote":"127.0.0.1:49978","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-20T17:38:10.652384Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.30745ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-09-20T17:38:10.652414Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"327.397124ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:38:10.652445Z","caller":"traceutil/trace.go:171","msg":"trace[1075324407] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1040; }","duration":"327.426462ms","start":"2024-09-20T17:38:10.325013Z","end":"2024-09-20T17:38:10.652439Z","steps":["trace[1075324407] 'agreement among raft nodes before linearized reading'  (duration: 327.387735ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:38:10.652430Z","caller":"traceutil/trace.go:171","msg":"trace[133845077] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1040; }","duration":"184.35459ms","start":"2024-09-20T17:38:10.468068Z","end":"2024-09-20T17:38:10.652423Z","steps":["trace[133845077] 'agreement among raft nodes before linearized reading'  (duration: 184.292099ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:38:10.653015Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.844009ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:38:10.653104Z","caller":"traceutil/trace.go:171","msg":"trace[209721001] range","detail":"{range_begin:/registry/persistentvolumeclaims/; range_end:/registry/persistentvolumeclaims0; response_count:0; response_revision:1040; }","duration":"175.93602ms","start":"2024-09-20T17:38:10.477160Z","end":"2024-09-20T17:38:10.653096Z","steps":["trace[209721001] 'agreement among raft nodes before linearized reading'  (duration: 175.83519ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:38:10.653637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"380.604231ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"info","ts":"2024-09-20T17:38:10.653728Z","caller":"traceutil/trace.go:171","msg":"trace[747134196] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1040; }","duration":"380.696537ms","start":"2024-09-20T17:38:10.273023Z","end":"2024-09-20T17:38:10.653720Z","steps":["trace[747134196] 'agreement among raft nodes before linearized reading'  (duration: 380.53387ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:38:10.653804Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T17:38:10.272989Z","time spent":"380.80517ms","remote":"127.0.0.1:49862","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":170,"response size":31,"request content":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true "}
	{"level":"info","ts":"2024-09-20T17:39:02.358224Z","caller":"traceutil/trace.go:171","msg":"trace[1180894535] transaction","detail":"{read_only:false; response_revision:1253; number_of_response:1; }","duration":"124.748523ms","start":"2024-09-20T17:39:02.233450Z","end":"2024-09-20T17:39:02.358199Z","steps":["trace[1180894535] 'process raft request'  (duration: 124.628143ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:46:47.805022Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1506}
	{"level":"info","ts":"2024-09-20T17:46:47.839955Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1506,"took":"34.098256ms","hash":2552434771,"current-db-size-bytes":7012352,"current-db-size":"7.0 MB","current-db-size-in-use-bytes":3891200,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2024-09-20T17:46:47.840019Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2552434771,"revision":1506,"compact-revision":-1}
	{"level":"warn","ts":"2024-09-20T17:46:50.466442Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.994871ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:46:50.466495Z","caller":"traceutil/trace.go:171","msg":"trace[1650142898] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2057; }","duration":"139.081796ms","start":"2024-09-20T17:46:50.327400Z","end":"2024-09-20T17:46:50.466482Z","steps":["trace[1650142898] 'range keys from in-memory index tree'  (duration: 138.980039ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:46:50.466583Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.518533ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:46:50.466595Z","caller":"traceutil/trace.go:171","msg":"trace[2129239253] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2057; }","duration":"128.546109ms","start":"2024-09-20T17:46:50.338045Z","end":"2024-09-20T17:46:50.466591Z","steps":["trace[2129239253] 'range keys from in-memory index tree'  (duration: 128.454845ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:46:52.057090Z","caller":"traceutil/trace.go:171","msg":"trace[2010592287] transaction","detail":"{read_only:false; response_revision:2066; number_of_response:1; }","duration":"142.531828ms","start":"2024-09-20T17:46:51.914533Z","end":"2024-09-20T17:46:52.057065Z","steps":["trace[2010592287] 'process raft request'  (duration: 142.444566ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:46:52.274053Z","caller":"traceutil/trace.go:171","msg":"trace[1207827034] transaction","detail":"{read_only:false; response_revision:2067; number_of_response:1; }","duration":"307.822812ms","start":"2024-09-20T17:46:51.966218Z","end":"2024-09-20T17:46:52.274041Z","steps":["trace[1207827034] 'process raft request'  (duration: 307.159868ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:46:52.276791Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T17:46:51.966199Z","time spent":"310.498319ms","remote":"127.0.0.1:50070","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-679190\" mod_revision:1964 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-679190\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-679190\" > >"}
	{"level":"info","ts":"2024-09-20T17:47:05.086002Z","caller":"traceutil/trace.go:171","msg":"trace[426642049] transaction","detail":"{read_only:false; response_revision:2125; number_of_response:1; }","duration":"360.035966ms","start":"2024-09-20T17:47:04.725950Z","end":"2024-09-20T17:47:05.085986Z","steps":["trace[426642049] 'process raft request'  (duration: 359.86959ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:47:05.086166Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T17:47:04.725893Z","time spent":"360.202664ms","remote":"127.0.0.1:49862","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":764,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-zvp8w.17f704774ef9a14d\" mod_revision:1538 > success:<request_put:<key:\"/registry/events/gadget/gadget-zvp8w.17f704774ef9a14d\" value_size:693 lease:8396277679900029684 >> failure:<request_range:<key:\"/registry/events/gadget/gadget-zvp8w.17f704774ef9a14d\" > >"}
	
	
	==> gcp-auth [ec0b4062645fc2c1c8c82cdce410360489fe6600fb411ea6d60c712a5c12813f] <==
	2024/09/20 17:38:30 Ready to write response ...
	2024/09/20 17:38:30 Ready to marshal response ...
	2024/09/20 17:38:30 Ready to write response ...
	2024/09/20 17:38:30 Ready to marshal response ...
	2024/09/20 17:38:30 Ready to write response ...
	2024/09/20 17:46:33 Ready to marshal response ...
	2024/09/20 17:46:33 Ready to write response ...
	2024/09/20 17:46:33 Ready to marshal response ...
	2024/09/20 17:46:33 Ready to write response ...
	2024/09/20 17:46:44 Ready to marshal response ...
	2024/09/20 17:46:44 Ready to write response ...
	2024/09/20 17:46:46 Ready to marshal response ...
	2024/09/20 17:46:46 Ready to write response ...
	2024/09/20 17:46:46 Ready to marshal response ...
	2024/09/20 17:46:46 Ready to write response ...
	2024/09/20 17:46:46 Ready to marshal response ...
	2024/09/20 17:46:46 Ready to write response ...
	2024/09/20 17:46:46 Ready to marshal response ...
	2024/09/20 17:46:46 Ready to write response ...
	2024/09/20 17:46:58 Ready to marshal response ...
	2024/09/20 17:46:58 Ready to write response ...
	2024/09/20 17:47:17 Ready to marshal response ...
	2024/09/20 17:47:17 Ready to write response ...
	2024/09/20 17:47:30 Ready to marshal response ...
	2024/09/20 17:47:30 Ready to write response ...
	
	
	==> kernel <==
	 17:47:47 up 11 min,  0 users,  load average: 0.28, 0.35, 0.33
	Linux addons-679190 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1ad81a4e34c0219e34fccc748356a548ec1f96c3939761411d1393005f3368bd] <==
	I0920 17:38:01.812005       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 17:38:01.813103       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 17:38:30.790581       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 17:38:30.791019       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0920 17:38:30.790878       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.117.29:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.117.29:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.117.29:443: connect: connection refused" logger="UnhandledError"
	I0920 17:38:30.830638       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0920 17:46:46.684183       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.95.13"}
	I0920 17:47:12.280076       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0920 17:47:12.410157       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	W0920 17:47:13.322883       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 17:47:17.844287       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 17:47:18.049415       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.232.246"}
	I0920 17:47:46.449796       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 17:47:46.449837       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 17:47:46.476080       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 17:47:46.476629       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 17:47:46.493047       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 17:47:46.493097       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 17:47:46.524567       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 17:47:46.524602       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 17:47:46.579848       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 17:47:46.579981       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [6b6b3339abef5b932a0a51290bacbf5ea276d9c2f651978f0fb128032a963ff0] <==
	I0920 17:47:00.372761       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="4.473µs"
	I0920 17:47:03.064068       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-679190"
	I0920 17:47:10.685049       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	E0920 17:47:13.329330       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:47:14.583755       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:47:14.583816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:47:16.280670       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:47:16.280822       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:47:19.806661       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:47:19.806718       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 17:47:22.450421       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0920 17:47:23.671311       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-679190"
	I0920 17:47:26.064680       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0920 17:47:26.064717       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 17:47:26.563165       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0920 17:47:26.563218       1 shared_informer.go:320] Caches are synced for garbage collector
	W0920 17:47:31.936565       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:47:31.936647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 17:47:39.622392       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
	I0920 17:47:39.739285       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
	I0920 17:47:40.218028       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-679190"
	I0920 17:47:45.311840       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="6.209µs"
	W0920 17:47:46.174030       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:47:46.174124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 17:47:46.615422       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="9.228µs"
	
	
	==> kube-proxy [92cf3212ca3856ac30692de35be4bf7391dbf53d3b71d366bbd05e33353b54b5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 17:36:58.351748       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 17:36:58.367671       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.158"]
	E0920 17:36:58.367735       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:36:58.429527       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 17:36:58.429561       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 17:36:58.429586       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:36:58.435086       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:36:58.435354       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:36:58.435365       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:36:58.438303       1 config.go:199] "Starting service config controller"
	I0920 17:36:58.438316       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:36:58.438348       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:36:58.438352       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:36:58.441613       1 config.go:328] "Starting node config controller"
	I0920 17:36:58.441622       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:36:58.539182       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 17:36:58.539245       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:36:58.541947       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b48780df679f85764217ee650d3268dfc7988e43dd065577ee1d4a41b3b94f2c] <==
	W0920 17:36:49.950209       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:36:49.950280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:49.987691       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 17:36:49.987738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.025498       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 17:36:50.025544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.093076       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 17:36:50.093121       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.124089       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 17:36:50.124138       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.155077       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 17:36:50.155234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.155746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 17:36:50.155937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.223176       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 17:36:50.223220       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.247365       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 17:36:50.247426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.317547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 17:36:50.317600       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.383564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 17:36:50.383615       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.534598       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 17:36:50.534719       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 17:36:53.562785       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 17:47:42 addons-679190 kubelet[1208]: E0920 17:47:42.184075    1208 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854462183548983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:550632,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:47:44 addons-679190 kubelet[1208]: I0920 17:47:44.956082    1208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thwhg\" (UniqueName: \"kubernetes.io/projected/276f4168-eb9c-4301-8955-d04a762140df-kube-api-access-thwhg\") pod \"276f4168-eb9c-4301-8955-d04a762140df\" (UID: \"276f4168-eb9c-4301-8955-d04a762140df\") "
	Sep 20 17:47:44 addons-679190 kubelet[1208]: I0920 17:47:44.956133    1208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/276f4168-eb9c-4301-8955-d04a762140df-gcp-creds\") pod \"276f4168-eb9c-4301-8955-d04a762140df\" (UID: \"276f4168-eb9c-4301-8955-d04a762140df\") "
	Sep 20 17:47:44 addons-679190 kubelet[1208]: I0920 17:47:44.956208    1208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/276f4168-eb9c-4301-8955-d04a762140df-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "276f4168-eb9c-4301-8955-d04a762140df" (UID: "276f4168-eb9c-4301-8955-d04a762140df"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 17:47:44 addons-679190 kubelet[1208]: I0920 17:47:44.959870    1208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/276f4168-eb9c-4301-8955-d04a762140df-kube-api-access-thwhg" (OuterVolumeSpecName: "kube-api-access-thwhg") pod "276f4168-eb9c-4301-8955-d04a762140df" (UID: "276f4168-eb9c-4301-8955-d04a762140df"). InnerVolumeSpecName "kube-api-access-thwhg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 17:47:45 addons-679190 kubelet[1208]: I0920 17:47:45.056814    1208 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-thwhg\" (UniqueName: \"kubernetes.io/projected/276f4168-eb9c-4301-8955-d04a762140df-kube-api-access-thwhg\") on node \"addons-679190\" DevicePath \"\""
	Sep 20 17:47:45 addons-679190 kubelet[1208]: I0920 17:47:45.056848    1208 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/276f4168-eb9c-4301-8955-d04a762140df-gcp-creds\") on node \"addons-679190\" DevicePath \"\""
	Sep 20 17:47:45 addons-679190 kubelet[1208]: I0920 17:47:45.647856    1208 scope.go:117] "RemoveContainer" containerID="7be8ed98ebd47974ce217afc7cbae98dbbf063603a24f289c5a4d25cccc4ab17"
	Sep 20 17:47:45 addons-679190 kubelet[1208]: I0920 17:47:45.713480    1208 scope.go:117] "RemoveContainer" containerID="7be8ed98ebd47974ce217afc7cbae98dbbf063603a24f289c5a4d25cccc4ab17"
	Sep 20 17:47:45 addons-679190 kubelet[1208]: E0920 17:47:45.715062    1208 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7be8ed98ebd47974ce217afc7cbae98dbbf063603a24f289c5a4d25cccc4ab17\": container with ID starting with 7be8ed98ebd47974ce217afc7cbae98dbbf063603a24f289c5a4d25cccc4ab17 not found: ID does not exist" containerID="7be8ed98ebd47974ce217afc7cbae98dbbf063603a24f289c5a4d25cccc4ab17"
	Sep 20 17:47:45 addons-679190 kubelet[1208]: I0920 17:47:45.715169    1208 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7be8ed98ebd47974ce217afc7cbae98dbbf063603a24f289c5a4d25cccc4ab17"} err="failed to get container status \"7be8ed98ebd47974ce217afc7cbae98dbbf063603a24f289c5a4d25cccc4ab17\": rpc error: code = NotFound desc = could not find container \"7be8ed98ebd47974ce217afc7cbae98dbbf063603a24f289c5a4d25cccc4ab17\": container with ID starting with 7be8ed98ebd47974ce217afc7cbae98dbbf063603a24f289c5a4d25cccc4ab17 not found: ID does not exist"
	Sep 20 17:47:45 addons-679190 kubelet[1208]: I0920 17:47:45.737213    1208 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="276f4168-eb9c-4301-8955-d04a762140df" path="/var/lib/kubelet/pods/276f4168-eb9c-4301-8955-d04a762140df/volumes"
	Sep 20 17:47:45 addons-679190 kubelet[1208]: I0920 17:47:45.765837    1208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwm8n\" (UniqueName: \"kubernetes.io/projected/4ad8ab0b-f43b-475a-984c-11d2a23963c0-kube-api-access-rwm8n\") pod \"4ad8ab0b-f43b-475a-984c-11d2a23963c0\" (UID: \"4ad8ab0b-f43b-475a-984c-11d2a23963c0\") "
	Sep 20 17:47:45 addons-679190 kubelet[1208]: I0920 17:47:45.774464    1208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ad8ab0b-f43b-475a-984c-11d2a23963c0-kube-api-access-rwm8n" (OuterVolumeSpecName: "kube-api-access-rwm8n") pod "4ad8ab0b-f43b-475a-984c-11d2a23963c0" (UID: "4ad8ab0b-f43b-475a-984c-11d2a23963c0"). InnerVolumeSpecName "kube-api-access-rwm8n". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 17:47:45 addons-679190 kubelet[1208]: I0920 17:47:45.867382    1208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtx9n\" (UniqueName: \"kubernetes.io/projected/0612b678-15da-44d6-acfb-c29dd8dd2b7d-kube-api-access-gtx9n\") pod \"0612b678-15da-44d6-acfb-c29dd8dd2b7d\" (UID: \"0612b678-15da-44d6-acfb-c29dd8dd2b7d\") "
	Sep 20 17:47:45 addons-679190 kubelet[1208]: I0920 17:47:45.867503    1208 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rwm8n\" (UniqueName: \"kubernetes.io/projected/4ad8ab0b-f43b-475a-984c-11d2a23963c0-kube-api-access-rwm8n\") on node \"addons-679190\" DevicePath \"\""
	Sep 20 17:47:45 addons-679190 kubelet[1208]: I0920 17:47:45.870355    1208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0612b678-15da-44d6-acfb-c29dd8dd2b7d-kube-api-access-gtx9n" (OuterVolumeSpecName: "kube-api-access-gtx9n") pod "0612b678-15da-44d6-acfb-c29dd8dd2b7d" (UID: "0612b678-15da-44d6-acfb-c29dd8dd2b7d"). InnerVolumeSpecName "kube-api-access-gtx9n". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 17:47:45 addons-679190 kubelet[1208]: I0920 17:47:45.968416    1208 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gtx9n\" (UniqueName: \"kubernetes.io/projected/0612b678-15da-44d6-acfb-c29dd8dd2b7d-kube-api-access-gtx9n\") on node \"addons-679190\" DevicePath \"\""
	Sep 20 17:47:46 addons-679190 kubelet[1208]: I0920 17:47:46.685691    1208 scope.go:117] "RemoveContainer" containerID="88333eb2bd8520bc6d9f2ec01e1b00cf091952a98ce6b22d632ecbbae8ba9b7f"
	Sep 20 17:47:47 addons-679190 kubelet[1208]: I0920 17:47:47.076237    1208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vv5x\" (UniqueName: \"kubernetes.io/projected/6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f-kube-api-access-2vv5x\") pod \"6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f\" (UID: \"6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f\") "
	Sep 20 17:47:47 addons-679190 kubelet[1208]: I0920 17:47:47.078111    1208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f-kube-api-access-2vv5x" (OuterVolumeSpecName: "kube-api-access-2vv5x") pod "6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f" (UID: "6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f"). InnerVolumeSpecName "kube-api-access-2vv5x". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 17:47:47 addons-679190 kubelet[1208]: I0920 17:47:47.177507    1208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v9pt\" (UniqueName: \"kubernetes.io/projected/a4b880c1-bd61-45bb-80cf-ae1dd6af7d4e-kube-api-access-5v9pt\") pod \"a4b880c1-bd61-45bb-80cf-ae1dd6af7d4e\" (UID: \"a4b880c1-bd61-45bb-80cf-ae1dd6af7d4e\") "
	Sep 20 17:47:47 addons-679190 kubelet[1208]: I0920 17:47:47.177590    1208 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2vv5x\" (UniqueName: \"kubernetes.io/projected/6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f-kube-api-access-2vv5x\") on node \"addons-679190\" DevicePath \"\""
	Sep 20 17:47:47 addons-679190 kubelet[1208]: I0920 17:47:47.179353    1208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4b880c1-bd61-45bb-80cf-ae1dd6af7d4e-kube-api-access-5v9pt" (OuterVolumeSpecName: "kube-api-access-5v9pt") pod "a4b880c1-bd61-45bb-80cf-ae1dd6af7d4e" (UID: "a4b880c1-bd61-45bb-80cf-ae1dd6af7d4e"). InnerVolumeSpecName "kube-api-access-5v9pt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 17:47:47 addons-679190 kubelet[1208]: I0920 17:47:47.278106    1208 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5v9pt\" (UniqueName: \"kubernetes.io/projected/a4b880c1-bd61-45bb-80cf-ae1dd6af7d4e-kube-api-access-5v9pt\") on node \"addons-679190\" DevicePath \"\""
	
	
	==> storage-provisioner [c860a4c507c477109d4747b1b074c68a4a31b9aeee2cfc9591edda6f92a49c41] <==
	I0920 17:37:04.229131       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 17:37:04.316870       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 17:37:04.316984       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 17:37:04.369383       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 17:37:04.369574       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-679190_755bbb3d-4dd9-4398-8fc4-ca84bbdfa577!
	I0920 17:37:04.369640       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f1220de9-2330-4b06-bc0f-6bb70dd8d11a", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-679190_755bbb3d-4dd9-4398-8fc4-ca84bbdfa577 became leader
	I0920 17:37:04.470051       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-679190_755bbb3d-4dd9-4398-8fc4-ca84bbdfa577!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-679190 -n addons-679190
helpers_test.go:261: (dbg) Run:  kubectl --context addons-679190 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-85mv4 ingress-nginx-admission-patch-44nll
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-679190 describe pod busybox ingress-nginx-admission-create-85mv4 ingress-nginx-admission-patch-44nll
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-679190 describe pod busybox ingress-nginx-admission-create-85mv4 ingress-nginx-admission-patch-44nll: exit status 1 (74.310559ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-679190/192.168.39.158
	Start Time:       Fri, 20 Sep 2024 17:38:30 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n77bn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n77bn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m18s                   default-scheduler  Successfully assigned default/busybox to addons-679190
	  Normal   Pulling    7m45s (x4 over 9m17s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m45s (x4 over 9m17s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m45s (x4 over 9m17s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m32s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m17s (x20 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-85mv4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-44nll" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-679190 describe pod busybox ingress-nginx-admission-create-85mv4 ingress-nginx-admission-patch-44nll: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.54s)
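Editor's note: the busybox describe output above shows the pod stuck in ImagePullBackOff because every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc fails with "unable to retrieve auth token: invalid username/password: unauthorized". Below is a minimal follow-up sketch in shell; it assumes the addons-679190 profile and kubectl context are still available and is not part of the recorded test run. It separates a registry-auth problem from a cluster-side problem:

	# hypothetical follow-up commands, not emitted by the test harness
	# list the pod's recent events in time order to see the repeated pull failures
	kubectl --context addons-679190 -n default get events --field-selector involvedObject.name=busybox --sort-by=.lastTimestamp
	# show the container's current waiting state and reason (expected: ImagePullBackOff)
	kubectl --context addons-679190 -n default get pod busybox -o jsonpath='{.status.containerStatuses[0].state}'
	# retry the pull directly on the node; an auth error here points at registry credentials, not the pod spec
	out/minikube-linux-amd64 -p addons-679190 ssh "sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"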

                                                
                                    
x
+
TestAddons/parallel/Ingress (152.56s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-679190 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-679190 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-679190 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [719ce5c1-7853-4fc9-8fd3-7725aba7ed0c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [719ce5c1-7853-4fc9-8fd3-7725aba7ed0c] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.005117994s
I0920 17:47:28.094837  244849 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-679190 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-679190 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.408211468s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:276: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
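Editor's note: exit status 28 from the ssh'd command matches curl's "operation timed out" code, so the request to the ingress controller on 127.0.0.1:80 never completed rather than returning an unexpected response. A minimal shell sketch for repeating the probe with a short explicit timeout and checking the controller's state; this is a hypothetical follow-up assuming the profile is still running, not part of the recorded run:

	# repeat the probe verbosely with a 10s cap instead of waiting for the default timeout
	out/minikube-linux-amd64 -p addons-679190 ssh "curl -v --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"
	# confirm the ingress-nginx controller pod and service are present and ready
	kubectl --context addons-679190 -n ingress-nginx get pods,svc -o wide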
addons_test.go:284: (dbg) Run:  kubectl --context addons-679190 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-679190 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.39.158
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-679190 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-679190 addons disable ingress-dns --alsologtostderr -v=1: (1.263752978s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-679190 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-679190 addons disable ingress --alsologtostderr -v=1: (7.701164064s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-679190 -n addons-679190
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-679190 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-679190 logs -n 25: (1.25625476s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC | 20 Sep 24 17:36 UTC |
	| delete  | -p download-only-799771                                                                     | download-only-799771 | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC | 20 Sep 24 17:36 UTC |
	| delete  | -p download-only-591101                                                                     | download-only-591101 | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC | 20 Sep 24 17:36 UTC |
	| delete  | -p download-only-799771                                                                     | download-only-799771 | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC | 20 Sep 24 17:36 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-242308 | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC |                     |
	|         | binary-mirror-242308                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46511                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-242308                                                                     | binary-mirror-242308 | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC | 20 Sep 24 17:36 UTC |
	| addons  | disable dashboard -p                                                                        | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC |                     |
	|         | addons-679190                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC |                     |
	|         | addons-679190                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-679190 --wait=true                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC | 20 Sep 24 17:38 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:46 UTC | 20 Sep 24 17:46 UTC |
	|         | -p addons-679190                                                                            |                      |         |         |                     |                     |
	| addons  | addons-679190 addons disable                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:46 UTC | 20 Sep 24 17:46 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:46 UTC | 20 Sep 24 17:46 UTC |
	|         | -p addons-679190                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-679190 ssh cat                                                                       | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:46 UTC | 20 Sep 24 17:46 UTC |
	|         | /opt/local-path-provisioner/pvc-4a7cfa23-ab8c-4f3b-b69f-a32cbb6790dc_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:46 UTC | 20 Sep 24 17:46 UTC |
	|         | addons-679190                                                                               |                      |         |         |                     |                     |
	| addons  | addons-679190 addons disable                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:46 UTC | 20 Sep 24 17:46 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-679190 addons disable                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:46 UTC | 20 Sep 24 17:47 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:47 UTC | 20 Sep 24 17:47 UTC |
	|         | addons-679190                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-679190 ssh curl -s                                                                   | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:47 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-679190 addons                                                                        | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:47 UTC | 20 Sep 24 17:47 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-679190 ip                                                                            | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:47 UTC | 20 Sep 24 17:47 UTC |
	| addons  | addons-679190 addons disable                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:47 UTC | 20 Sep 24 17:47 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-679190 addons                                                                        | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:47 UTC | 20 Sep 24 17:47 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-679190 ip                                                                            | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:49 UTC | 20 Sep 24 17:49 UTC |
	| addons  | addons-679190 addons disable                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:49 UTC | 20 Sep 24 17:49 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-679190 addons disable                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:49 UTC | 20 Sep 24 17:49 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:36:14
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:36:14.402655  245557 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:36:14.402933  245557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:36:14.402943  245557 out.go:358] Setting ErrFile to fd 2...
	I0920 17:36:14.402948  245557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:36:14.403159  245557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 17:36:14.403805  245557 out.go:352] Setting JSON to false
	I0920 17:36:14.404822  245557 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4717,"bootTime":1726849057,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:36:14.404931  245557 start.go:139] virtualization: kvm guest
	I0920 17:36:14.407275  245557 out.go:177] * [addons-679190] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:36:14.408502  245557 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 17:36:14.408541  245557 notify.go:220] Checking for updates...
	I0920 17:36:14.411057  245557 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:36:14.412803  245557 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:36:14.414198  245557 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:36:14.415792  245557 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:36:14.417282  245557 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:36:14.418952  245557 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:36:14.453245  245557 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 17:36:14.454782  245557 start.go:297] selected driver: kvm2
	I0920 17:36:14.454802  245557 start.go:901] validating driver "kvm2" against <nil>
	I0920 17:36:14.454819  245557 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:36:14.455638  245557 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:36:14.455744  245557 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 17:36:14.473296  245557 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 17:36:14.473373  245557 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:36:14.473597  245557 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:36:14.473630  245557 cni.go:84] Creating CNI manager for ""
	I0920 17:36:14.473686  245557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 17:36:14.473698  245557 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 17:36:14.473755  245557 start.go:340] cluster config:
	{Name:addons-679190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-679190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:36:14.473865  245557 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:36:14.475815  245557 out.go:177] * Starting "addons-679190" primary control-plane node in "addons-679190" cluster
	I0920 17:36:14.477065  245557 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:36:14.477119  245557 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 17:36:14.477134  245557 cache.go:56] Caching tarball of preloaded images
	I0920 17:36:14.477218  245557 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:36:14.477230  245557 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:36:14.477537  245557 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/config.json ...
	I0920 17:36:14.477565  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/config.json: {Name:mk111f108190ba76ef8034134b6af7b7147db588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:14.477758  245557 start.go:360] acquireMachinesLock for addons-679190: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:36:14.477823  245557 start.go:364] duration metric: took 47.775µs to acquireMachinesLock for "addons-679190"
	I0920 17:36:14.477861  245557 start.go:93] Provisioning new machine with config: &{Name:addons-679190 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-679190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:36:14.477966  245557 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 17:36:14.479569  245557 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0920 17:36:14.479725  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:14.479766  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:14.495292  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36423
	I0920 17:36:14.495863  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:14.496485  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:14.496509  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:14.496865  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:14.497041  245557 main.go:141] libmachine: (addons-679190) Calling .GetMachineName
	I0920 17:36:14.497187  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:14.497338  245557 start.go:159] libmachine.API.Create for "addons-679190" (driver="kvm2")
	I0920 17:36:14.497372  245557 client.go:168] LocalClient.Create starting
	I0920 17:36:14.497411  245557 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem
	I0920 17:36:14.582390  245557 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem
	I0920 17:36:14.704786  245557 main.go:141] libmachine: Running pre-create checks...
	I0920 17:36:14.704815  245557 main.go:141] libmachine: (addons-679190) Calling .PreCreateCheck
	I0920 17:36:14.705320  245557 main.go:141] libmachine: (addons-679190) Calling .GetConfigRaw
	I0920 17:36:14.705938  245557 main.go:141] libmachine: Creating machine...
	I0920 17:36:14.705960  245557 main.go:141] libmachine: (addons-679190) Calling .Create
	I0920 17:36:14.706168  245557 main.go:141] libmachine: (addons-679190) Creating KVM machine...
	I0920 17:36:14.707572  245557 main.go:141] libmachine: (addons-679190) DBG | found existing default KVM network
	I0920 17:36:14.708407  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:14.708217  245579 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00020b330}
	I0920 17:36:14.708450  245557 main.go:141] libmachine: (addons-679190) DBG | created network xml: 
	I0920 17:36:14.708468  245557 main.go:141] libmachine: (addons-679190) DBG | <network>
	I0920 17:36:14.708486  245557 main.go:141] libmachine: (addons-679190) DBG |   <name>mk-addons-679190</name>
	I0920 17:36:14.708539  245557 main.go:141] libmachine: (addons-679190) DBG |   <dns enable='no'/>
	I0920 17:36:14.708569  245557 main.go:141] libmachine: (addons-679190) DBG |   
	I0920 17:36:14.708581  245557 main.go:141] libmachine: (addons-679190) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 17:36:14.708596  245557 main.go:141] libmachine: (addons-679190) DBG |     <dhcp>
	I0920 17:36:14.708609  245557 main.go:141] libmachine: (addons-679190) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 17:36:14.708620  245557 main.go:141] libmachine: (addons-679190) DBG |     </dhcp>
	I0920 17:36:14.708631  245557 main.go:141] libmachine: (addons-679190) DBG |   </ip>
	I0920 17:36:14.708640  245557 main.go:141] libmachine: (addons-679190) DBG |   
	I0920 17:36:14.708651  245557 main.go:141] libmachine: (addons-679190) DBG | </network>
	I0920 17:36:14.708660  245557 main.go:141] libmachine: (addons-679190) DBG | 
	I0920 17:36:14.714317  245557 main.go:141] libmachine: (addons-679190) DBG | trying to create private KVM network mk-addons-679190 192.168.39.0/24...
	I0920 17:36:14.786920  245557 main.go:141] libmachine: (addons-679190) DBG | private KVM network mk-addons-679190 192.168.39.0/24 created
	I0920 17:36:14.786967  245557 main.go:141] libmachine: (addons-679190) Setting up store path in /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190 ...
	I0920 17:36:14.786983  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:14.786868  245579 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:36:14.787006  245557 main.go:141] libmachine: (addons-679190) Building disk image from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 17:36:14.787026  245557 main.go:141] libmachine: (addons-679190) Downloading /home/jenkins/minikube-integration/19679-237658/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 17:36:15.067231  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:15.067014  245579 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa...
	I0920 17:36:15.314104  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:15.313891  245579 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/addons-679190.rawdisk...
	I0920 17:36:15.314159  245557 main.go:141] libmachine: (addons-679190) DBG | Writing magic tar header
	I0920 17:36:15.314176  245557 main.go:141] libmachine: (addons-679190) DBG | Writing SSH key tar header
	I0920 17:36:15.314187  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:15.314075  245579 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190 ...
	I0920 17:36:15.314203  245557 main.go:141] libmachine: (addons-679190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190
	I0920 17:36:15.314278  245557 main.go:141] libmachine: (addons-679190) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190 (perms=drwx------)
	I0920 17:36:15.314312  245557 main.go:141] libmachine: (addons-679190) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines (perms=drwxr-xr-x)
	I0920 17:36:15.314323  245557 main.go:141] libmachine: (addons-679190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines
	I0920 17:36:15.314336  245557 main.go:141] libmachine: (addons-679190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:36:15.314343  245557 main.go:141] libmachine: (addons-679190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658
	I0920 17:36:15.314349  245557 main.go:141] libmachine: (addons-679190) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube (perms=drwxr-xr-x)
	I0920 17:36:15.314357  245557 main.go:141] libmachine: (addons-679190) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658 (perms=drwxrwxr-x)
	I0920 17:36:15.314367  245557 main.go:141] libmachine: (addons-679190) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 17:36:15.314379  245557 main.go:141] libmachine: (addons-679190) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 17:36:15.314391  245557 main.go:141] libmachine: (addons-679190) Creating domain...
	I0920 17:36:15.314402  245557 main.go:141] libmachine: (addons-679190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 17:36:15.314413  245557 main.go:141] libmachine: (addons-679190) DBG | Checking permissions on dir: /home/jenkins
	I0920 17:36:15.314423  245557 main.go:141] libmachine: (addons-679190) DBG | Checking permissions on dir: /home
	I0920 17:36:15.314435  245557 main.go:141] libmachine: (addons-679190) DBG | Skipping /home - not owner
	I0920 17:36:15.315774  245557 main.go:141] libmachine: (addons-679190) define libvirt domain using xml: 
	I0920 17:36:15.315815  245557 main.go:141] libmachine: (addons-679190) <domain type='kvm'>
	I0920 17:36:15.315826  245557 main.go:141] libmachine: (addons-679190)   <name>addons-679190</name>
	I0920 17:36:15.315834  245557 main.go:141] libmachine: (addons-679190)   <memory unit='MiB'>4000</memory>
	I0920 17:36:15.315864  245557 main.go:141] libmachine: (addons-679190)   <vcpu>2</vcpu>
	I0920 17:36:15.315882  245557 main.go:141] libmachine: (addons-679190)   <features>
	I0920 17:36:15.315888  245557 main.go:141] libmachine: (addons-679190)     <acpi/>
	I0920 17:36:15.315893  245557 main.go:141] libmachine: (addons-679190)     <apic/>
	I0920 17:36:15.315898  245557 main.go:141] libmachine: (addons-679190)     <pae/>
	I0920 17:36:15.315903  245557 main.go:141] libmachine: (addons-679190)     
	I0920 17:36:15.315908  245557 main.go:141] libmachine: (addons-679190)   </features>
	I0920 17:36:15.315915  245557 main.go:141] libmachine: (addons-679190)   <cpu mode='host-passthrough'>
	I0920 17:36:15.315933  245557 main.go:141] libmachine: (addons-679190)   
	I0920 17:36:15.315947  245557 main.go:141] libmachine: (addons-679190)   </cpu>
	I0920 17:36:15.315956  245557 main.go:141] libmachine: (addons-679190)   <os>
	I0920 17:36:15.315968  245557 main.go:141] libmachine: (addons-679190)     <type>hvm</type>
	I0920 17:36:15.315977  245557 main.go:141] libmachine: (addons-679190)     <boot dev='cdrom'/>
	I0920 17:36:15.315988  245557 main.go:141] libmachine: (addons-679190)     <boot dev='hd'/>
	I0920 17:36:15.315997  245557 main.go:141] libmachine: (addons-679190)     <bootmenu enable='no'/>
	I0920 17:36:15.316006  245557 main.go:141] libmachine: (addons-679190)   </os>
	I0920 17:36:15.316011  245557 main.go:141] libmachine: (addons-679190)   <devices>
	I0920 17:36:15.316017  245557 main.go:141] libmachine: (addons-679190)     <disk type='file' device='cdrom'>
	I0920 17:36:15.316028  245557 main.go:141] libmachine: (addons-679190)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/boot2docker.iso'/>
	I0920 17:36:15.316043  245557 main.go:141] libmachine: (addons-679190)       <target dev='hdc' bus='scsi'/>
	I0920 17:36:15.316055  245557 main.go:141] libmachine: (addons-679190)       <readonly/>
	I0920 17:36:15.316064  245557 main.go:141] libmachine: (addons-679190)     </disk>
	I0920 17:36:15.316073  245557 main.go:141] libmachine: (addons-679190)     <disk type='file' device='disk'>
	I0920 17:36:15.316112  245557 main.go:141] libmachine: (addons-679190)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 17:36:15.316140  245557 main.go:141] libmachine: (addons-679190)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/addons-679190.rawdisk'/>
	I0920 17:36:15.316151  245557 main.go:141] libmachine: (addons-679190)       <target dev='hda' bus='virtio'/>
	I0920 17:36:15.316156  245557 main.go:141] libmachine: (addons-679190)     </disk>
	I0920 17:36:15.316164  245557 main.go:141] libmachine: (addons-679190)     <interface type='network'>
	I0920 17:36:15.316171  245557 main.go:141] libmachine: (addons-679190)       <source network='mk-addons-679190'/>
	I0920 17:36:15.316176  245557 main.go:141] libmachine: (addons-679190)       <model type='virtio'/>
	I0920 17:36:15.316183  245557 main.go:141] libmachine: (addons-679190)     </interface>
	I0920 17:36:15.316188  245557 main.go:141] libmachine: (addons-679190)     <interface type='network'>
	I0920 17:36:15.316195  245557 main.go:141] libmachine: (addons-679190)       <source network='default'/>
	I0920 17:36:15.316200  245557 main.go:141] libmachine: (addons-679190)       <model type='virtio'/>
	I0920 17:36:15.316206  245557 main.go:141] libmachine: (addons-679190)     </interface>
	I0920 17:36:15.316219  245557 main.go:141] libmachine: (addons-679190)     <serial type='pty'>
	I0920 17:36:15.316228  245557 main.go:141] libmachine: (addons-679190)       <target port='0'/>
	I0920 17:36:15.316240  245557 main.go:141] libmachine: (addons-679190)     </serial>
	I0920 17:36:15.316250  245557 main.go:141] libmachine: (addons-679190)     <console type='pty'>
	I0920 17:36:15.316263  245557 main.go:141] libmachine: (addons-679190)       <target type='serial' port='0'/>
	I0920 17:36:15.316272  245557 main.go:141] libmachine: (addons-679190)     </console>
	I0920 17:36:15.316283  245557 main.go:141] libmachine: (addons-679190)     <rng model='virtio'>
	I0920 17:36:15.316295  245557 main.go:141] libmachine: (addons-679190)       <backend model='random'>/dev/random</backend>
	I0920 17:36:15.316305  245557 main.go:141] libmachine: (addons-679190)     </rng>
	I0920 17:36:15.316313  245557 main.go:141] libmachine: (addons-679190)     
	I0920 17:36:15.316348  245557 main.go:141] libmachine: (addons-679190)     
	I0920 17:36:15.316373  245557 main.go:141] libmachine: (addons-679190)   </devices>
	I0920 17:36:15.316383  245557 main.go:141] libmachine: (addons-679190) </domain>
	I0920 17:36:15.316393  245557 main.go:141] libmachine: (addons-679190) 
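The XML block logged above is the libvirt domain definition the kvm2 driver feeds to libvirtd: the boot2docker ISO attached as a CD-ROM boot device, the raw disk image, two virtio NICs (one on the private mk-addons-679190 network, one on libvirt's default network), a serial console, and a virtio RNG. For illustration only, a minimal sketch of defining and starting such a domain with the Go libvirt bindings might look like the following; the import path, file name, and error handling are assumptions, not the driver's actual code.

package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt" // older code imports github.com/libvirt/libvirt-go
)

func main() {
	// Read a domain definition like the one logged above (the file name is illustrative).
	domainXML, err := os.ReadFile("addons-679190.xml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Connect to the system libvirt daemon, the same URI the cluster config uses (qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer conn.Close()

	// Define the persistent domain from the XML, then boot it.
	dom, err := conn.DomainDefineXML(string(domainXML))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("domain defined and started")
}

The driver's flow has the same shape: define the persistent domain from the generated XML, then create (start) it, which is what the following "Ensuring networks are active..." and "Creating domain..." lines record.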
	I0920 17:36:15.320892  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:3c:0d:15 in network default
	I0920 17:36:15.321583  245557 main.go:141] libmachine: (addons-679190) Ensuring networks are active...
	I0920 17:36:15.321600  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:15.322455  245557 main.go:141] libmachine: (addons-679190) Ensuring network default is active
	I0920 17:36:15.322876  245557 main.go:141] libmachine: (addons-679190) Ensuring network mk-addons-679190 is active
	I0920 17:36:15.323465  245557 main.go:141] libmachine: (addons-679190) Getting domain xml...
	I0920 17:36:15.324200  245557 main.go:141] libmachine: (addons-679190) Creating domain...
	I0920 17:36:16.552011  245557 main.go:141] libmachine: (addons-679190) Waiting to get IP...
	I0920 17:36:16.552931  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:16.553409  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:16.553467  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:16.553404  245579 retry.go:31] will retry after 233.074861ms: waiting for machine to come up
	I0920 17:36:16.788019  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:16.788566  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:16.788598  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:16.788486  245579 retry.go:31] will retry after 254.61991ms: waiting for machine to come up
	I0920 17:36:17.044950  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:17.045459  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:17.045481  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:17.045403  245579 retry.go:31] will retry after 378.47406ms: waiting for machine to come up
	I0920 17:36:17.424996  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:17.425465  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:17.425530  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:17.425456  245579 retry.go:31] will retry after 555.098735ms: waiting for machine to come up
	I0920 17:36:17.982414  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:17.982850  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:17.982872  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:17.982792  245579 retry.go:31] will retry after 674.733173ms: waiting for machine to come up
	I0920 17:36:18.658928  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:18.659386  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:18.659419  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:18.659377  245579 retry.go:31] will retry after 611.03774ms: waiting for machine to come up
	I0920 17:36:19.272181  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:19.272670  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:19.272694  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:19.272607  245579 retry.go:31] will retry after 945.481389ms: waiting for machine to come up
	I0920 17:36:20.219424  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:20.219953  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:20.219984  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:20.219887  245579 retry.go:31] will retry after 1.421505917s: waiting for machine to come up
	I0920 17:36:21.643502  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:21.643959  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:21.643984  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:21.643882  245579 retry.go:31] will retry after 1.172513378s: waiting for machine to come up
	I0920 17:36:22.818244  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:22.818633  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:22.818660  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:22.818591  245579 retry.go:31] will retry after 1.867074328s: waiting for machine to come up
	I0920 17:36:24.687694  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:24.688210  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:24.688237  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:24.688136  245579 retry.go:31] will retry after 2.905548451s: waiting for machine to come up
	I0920 17:36:27.597342  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:27.597969  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:27.597998  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:27.597896  245579 retry.go:31] will retry after 3.379184262s: waiting for machine to come up
	I0920 17:36:30.979086  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:30.979495  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:30.979519  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:30.979448  245579 retry.go:31] will retry after 3.110787974s: waiting for machine to come up
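Until the guest requests a DHCP lease, the driver cannot learn its address, so it polls the lease table for the VM's MAC and retries with a growing, slightly randomized delay (233ms, 254ms, 378ms, ... up to a few seconds above). A rough sketch of that wait-for-IP loop is below; lookup is a hypothetical stand-in for the actual lease query, and the backoff constants are illustrative.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it reports an address or the deadline passes,
// sleeping a little longer (with jitter) between attempts, in the spirit of
// the retry.go lines above.
func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookup(); ok {
			return ip, nil
		}
		// Sleep the base delay plus up to 100% jitter, then grow the base delay.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		if delay < 2*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for machine to come up")
}

func main() {
	attempts := 0
	// Fake lease query that "finds" an IP on the fourth attempt.
	lookup := func() (string, bool) {
		attempts++
		if attempts < 4 {
			return "", false
		}
		return "192.168.39.158", true
	}
	ip, err := waitForIP(lookup, time.Minute)
	fmt.Println(ip, err)
}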
	I0920 17:36:34.093921  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.094329  245557 main.go:141] libmachine: (addons-679190) Found IP for machine: 192.168.39.158
	I0920 17:36:34.094349  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has current primary IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.094357  245557 main.go:141] libmachine: (addons-679190) Reserving static IP address...
	I0920 17:36:34.094749  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find host DHCP lease matching {name: "addons-679190", mac: "52:54:00:40:27:d9", ip: "192.168.39.158"} in network mk-addons-679190
	I0920 17:36:34.175576  245557 main.go:141] libmachine: (addons-679190) Reserved static IP address: 192.168.39.158
	I0920 17:36:34.175604  245557 main.go:141] libmachine: (addons-679190) DBG | Getting to WaitForSSH function...
	I0920 17:36:34.175611  245557 main.go:141] libmachine: (addons-679190) Waiting for SSH to be available...
	I0920 17:36:34.178818  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.179284  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:minikube Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.179318  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.179535  245557 main.go:141] libmachine: (addons-679190) DBG | Using SSH client type: external
	I0920 17:36:34.179710  245557 main.go:141] libmachine: (addons-679190) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa (-rw-------)
	I0920 17:36:34.179795  245557 main.go:141] libmachine: (addons-679190) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 17:36:34.179828  245557 main.go:141] libmachine: (addons-679190) DBG | About to run SSH command:
	I0920 17:36:34.179847  245557 main.go:141] libmachine: (addons-679190) DBG | exit 0
	I0920 17:36:34.306044  245557 main.go:141] libmachine: (addons-679190) DBG | SSH cmd err, output: <nil>: 
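Once the lease appears, the driver probes SSH by shelling out to /usr/bin/ssh with host-key checking disabled and running a no-op command (exit 0) until it succeeds, as the "Using SSH client type: external" lines above show. A small illustrative version of that readiness probe follows; the user, key path, and retry interval are placeholders, not the driver's exact values.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs a no-op command over ssh, mirroring the external probe in the
// log above: if `exit 0` succeeds, the machine is accepting SSH connections.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	for !sshReady("192.168.39.158", "/path/to/machines/addons-679190/id_rsa") {
		time.Sleep(2 * time.Second)
	}
	fmt.Println("SSH is available")
}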
	I0920 17:36:34.306371  245557 main.go:141] libmachine: (addons-679190) KVM machine creation complete!
	I0920 17:36:34.306713  245557 main.go:141] libmachine: (addons-679190) Calling .GetConfigRaw
	I0920 17:36:34.307406  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:34.307658  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:34.307833  245557 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 17:36:34.307846  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:34.309410  245557 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 17:36:34.309438  245557 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 17:36:34.309444  245557 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 17:36:34.309450  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:34.312360  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.312741  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.312770  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.312993  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:34.313211  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.313408  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.313560  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:34.313751  245557 main.go:141] libmachine: Using SSH client type: native
	I0920 17:36:34.314059  245557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0920 17:36:34.314074  245557 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 17:36:34.421222  245557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:36:34.421246  245557 main.go:141] libmachine: Detecting the provisioner...
	I0920 17:36:34.421255  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:34.424519  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.424951  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.424984  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.425125  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:34.425370  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.425509  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.425630  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:34.425752  245557 main.go:141] libmachine: Using SSH client type: native
	I0920 17:36:34.425952  245557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0920 17:36:34.425963  245557 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 17:36:34.534619  245557 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 17:36:34.534731  245557 main.go:141] libmachine: found compatible host: buildroot
	I0920 17:36:34.534745  245557 main.go:141] libmachine: Provisioning with buildroot...
	I0920 17:36:34.534753  245557 main.go:141] libmachine: (addons-679190) Calling .GetMachineName
	I0920 17:36:34.535038  245557 buildroot.go:166] provisioning hostname "addons-679190"
	I0920 17:36:34.535064  245557 main.go:141] libmachine: (addons-679190) Calling .GetMachineName
	I0920 17:36:34.535245  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:34.538122  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.538459  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.538489  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.538610  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:34.538795  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.538955  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.539101  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:34.539263  245557 main.go:141] libmachine: Using SSH client type: native
	I0920 17:36:34.539465  245557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0920 17:36:34.539483  245557 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-679190 && echo "addons-679190" | sudo tee /etc/hostname
	I0920 17:36:34.663598  245557 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-679190
	
	I0920 17:36:34.663632  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:34.666622  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.667078  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.667114  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.667316  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:34.667476  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.667667  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.667787  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:34.667933  245557 main.go:141] libmachine: Using SSH client type: native
	I0920 17:36:34.668103  245557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0920 17:36:34.668119  245557 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-679190' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-679190/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-679190' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:36:34.787041  245557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:36:34.787076  245557 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 17:36:34.787136  245557 buildroot.go:174] setting up certificates
	I0920 17:36:34.787154  245557 provision.go:84] configureAuth start
	I0920 17:36:34.787172  245557 main.go:141] libmachine: (addons-679190) Calling .GetMachineName
	I0920 17:36:34.787485  245557 main.go:141] libmachine: (addons-679190) Calling .GetIP
	I0920 17:36:34.790870  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.791296  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.791324  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.791540  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:34.793848  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.794252  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.794283  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.794450  245557 provision.go:143] copyHostCerts
	I0920 17:36:34.794535  245557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 17:36:34.794685  245557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 17:36:34.794773  245557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 17:36:34.794847  245557 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.addons-679190 san=[127.0.0.1 192.168.39.158 addons-679190 localhost minikube]
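The server certificate generated here is signed by the profile's CA and carries the node's SANs (127.0.0.1, 192.168.39.158, addons-679190, localhost, minikube) with organization jenkins.addons-679190. The following is only a rough, self-contained sketch of issuing such a certificate with Go's crypto/x509; the key size, validity periods, serial numbers, and output file name are assumptions, and the CA is generated in-memory rather than loaded from ca.pem/ca-key.pem as minikube does.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA standing in for the profile's ca.pem/ca-key.pem (illustrative only).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate with the SANs from the log above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-679190"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-679190", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.158")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}); err != nil {
		log.Fatal(err)
	}
}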
	I0920 17:36:34.890555  245557 provision.go:177] copyRemoteCerts
	I0920 17:36:34.890650  245557 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:36:34.890686  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:34.893735  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.894102  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.894133  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.894315  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:34.894532  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.894715  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:34.894855  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:34.980634  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 17:36:35.005273  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 17:36:35.029188  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 17:36:35.052832  245557 provision.go:87] duration metric: took 265.657137ms to configureAuth
	I0920 17:36:35.052876  245557 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:36:35.053063  245557 config.go:182] Loaded profile config "addons-679190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:36:35.053145  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:35.056181  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.056518  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.056559  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.056787  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:35.056985  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:35.057136  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:35.057315  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:35.057524  245557 main.go:141] libmachine: Using SSH client type: native
	I0920 17:36:35.057740  245557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0920 17:36:35.057756  245557 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:36:35.573462  245557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:36:35.573493  245557 main.go:141] libmachine: Checking connection to Docker...
	I0920 17:36:35.573502  245557 main.go:141] libmachine: (addons-679190) Calling .GetURL
	I0920 17:36:35.574853  245557 main.go:141] libmachine: (addons-679190) DBG | Using libvirt version 6000000
	I0920 17:36:35.576713  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.577033  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.577063  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.577214  245557 main.go:141] libmachine: Docker is up and running!
	I0920 17:36:35.577231  245557 main.go:141] libmachine: Reticulating splines...
	I0920 17:36:35.577240  245557 client.go:171] duration metric: took 21.079858169s to LocalClient.Create
	I0920 17:36:35.577264  245557 start.go:167] duration metric: took 21.079928938s to libmachine.API.Create "addons-679190"
	I0920 17:36:35.577275  245557 start.go:293] postStartSetup for "addons-679190" (driver="kvm2")
	I0920 17:36:35.577284  245557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:36:35.577302  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:35.577559  245557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:36:35.577583  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:35.579661  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.579997  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.580031  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.580129  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:35.580313  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:35.580436  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:35.580539  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:35.664189  245557 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:36:35.668353  245557 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:36:35.668386  245557 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 17:36:35.668464  245557 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 17:36:35.668487  245557 start.go:296] duration metric: took 91.20684ms for postStartSetup
	I0920 17:36:35.668527  245557 main.go:141] libmachine: (addons-679190) Calling .GetConfigRaw
	I0920 17:36:35.669134  245557 main.go:141] libmachine: (addons-679190) Calling .GetIP
	I0920 17:36:35.671946  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.672345  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.672368  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.672652  245557 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/config.json ...
	I0920 17:36:35.672885  245557 start.go:128] duration metric: took 21.194903618s to createHost
	I0920 17:36:35.672915  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:35.675216  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.675474  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.675498  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.675604  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:35.675764  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:35.675940  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:35.676046  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:35.676204  245557 main.go:141] libmachine: Using SSH client type: native
	I0920 17:36:35.676362  245557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0920 17:36:35.676372  245557 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:36:35.786755  245557 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726853795.758756532
	
	I0920 17:36:35.786780  245557 fix.go:216] guest clock: 1726853795.758756532
	I0920 17:36:35.786799  245557 fix.go:229] Guest: 2024-09-20 17:36:35.758756532 +0000 UTC Remote: 2024-09-20 17:36:35.672900424 +0000 UTC m=+21.305727812 (delta=85.856108ms)
	I0920 17:36:35.786847  245557 fix.go:200] guest clock delta is within tolerance: 85.856108ms
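The guest clock is read over SSH (date +%s.%N) and compared against the host's wall clock; since the 85.856108ms delta is within the driver's tolerance, no adjustment is made here. A tiny self-contained sketch of that comparison, using the timestamps from the log and an assumed 2s tolerance:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the log lines above; the 2s tolerance is an assumption.
	guest := time.Unix(1726853795, 758756532)                            // guest clock: 1726853795.758756532
	host := time.Date(2024, 9, 20, 17, 36, 35, 672900424, time.UTC)     // remote (host) reading
	tolerance := 2 * time.Second

	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta > tolerance {
		fmt.Printf("guest clock off by %v, adjustment needed\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}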
	I0920 17:36:35.786854  245557 start.go:83] releasing machines lock for "addons-679190", held for 21.309019314s
	I0920 17:36:35.786901  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:35.787199  245557 main.go:141] libmachine: (addons-679190) Calling .GetIP
	I0920 17:36:35.790139  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.790527  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.790550  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.790715  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:35.791190  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:35.791390  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:35.791498  245557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:36:35.791545  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:35.791598  245557 ssh_runner.go:195] Run: cat /version.json
	I0920 17:36:35.791651  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:35.794437  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.794670  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.794822  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.794852  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.795016  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:35.795136  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.795161  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.795193  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:35.795310  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:35.795381  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:35.795460  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:35.795532  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:35.795596  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:35.795696  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:35.911918  245557 ssh_runner.go:195] Run: systemctl --version
	I0920 17:36:35.917670  245557 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:36:36.074996  245557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:36:36.080814  245557 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:36:36.080895  245557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:36:36.096152  245557 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 17:36:36.096189  245557 start.go:495] detecting cgroup driver to use...
	I0920 17:36:36.096260  245557 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:36:36.113653  245557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:36:36.128855  245557 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:36:36.128933  245557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:36:36.143261  245557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:36:36.157398  245557 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:36:36.266690  245557 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:36:36.425266  245557 docker.go:233] disabling docker service ...
	I0920 17:36:36.425347  245557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:36:36.446451  245557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:36:36.459829  245557 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:36:36.571061  245557 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:36:36.683832  245557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:36:36.698810  245557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:36:36.718244  245557 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:36:36.718313  245557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:36:36.729705  245557 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:36:36.729784  245557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:36:36.741247  245557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:36:36.752134  245557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:36:36.762794  245557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:36:36.773800  245557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:36:36.784266  245557 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:36:36.801953  245557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:36:36.812569  245557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:36:36.822394  245557 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 17:36:36.822468  245557 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 17:36:36.835966  245557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:36:36.845803  245557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:36:36.958625  245557 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 17:36:37.052231  245557 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:36:37.052346  245557 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:36:37.057614  245557 start.go:563] Will wait 60s for crictl version
	I0920 17:36:37.057825  245557 ssh_runner.go:195] Run: which crictl
	I0920 17:36:37.061526  245557 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:36:37.105824  245557 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:36:37.105959  245557 ssh_runner.go:195] Run: crio --version
	I0920 17:36:37.136539  245557 ssh_runner.go:195] Run: crio --version
	I0920 17:36:37.171796  245557 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:36:37.173345  245557 main.go:141] libmachine: (addons-679190) Calling .GetIP
	I0920 17:36:37.176324  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:37.176764  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:37.176792  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:37.177021  245557 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:36:37.181300  245557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:36:37.194040  245557 kubeadm.go:883] updating cluster {Name:addons-679190 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-679190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 17:36:37.194155  245557 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:36:37.194199  245557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:36:37.225234  245557 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 17:36:37.225302  245557 ssh_runner.go:195] Run: which lz4
	I0920 17:36:37.229191  245557 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 17:36:37.233185  245557 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 17:36:37.233226  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 17:36:38.392285  245557 crio.go:462] duration metric: took 1.163136107s to copy over tarball
	I0920 17:36:38.392376  245557 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 17:36:40.499360  245557 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.106950323s)
	I0920 17:36:40.499391  245557 crio.go:469] duration metric: took 2.107072401s to extract the tarball
	I0920 17:36:40.499401  245557 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 17:36:40.535110  245557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:36:40.583829  245557 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 17:36:40.583859  245557 cache_images.go:84] Images are preloaded, skipping loading
	I0920 17:36:40.583871  245557 kubeadm.go:934] updating node { 192.168.39.158 8443 v1.31.1 crio true true} ...
	I0920 17:36:40.584018  245557 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-679190 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-679190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 17:36:40.584106  245557 ssh_runner.go:195] Run: crio config
	I0920 17:36:40.641090  245557 cni.go:84] Creating CNI manager for ""
	I0920 17:36:40.641113  245557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 17:36:40.641123  245557 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 17:36:40.641149  245557 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.158 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-679190 NodeName:addons-679190 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 17:36:40.641304  245557 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-679190"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 17:36:40.641382  245557 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:36:40.652528  245557 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 17:36:40.652607  245557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 17:36:40.663453  245557 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 17:36:40.681121  245557 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:36:40.698855  245557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0920 17:36:40.717572  245557 ssh_runner.go:195] Run: grep 192.168.39.158	control-plane.minikube.internal$ /etc/hosts
	I0920 17:36:40.721648  245557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
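The /etc/hosts rewrite above is an idempotent replace-then-append: any existing control-plane.minikube.internal line is stripped with grep -v, the fresh entry is echoed back, and the result is copied over the original through a temp file so a half-written hosts file is never left in place. A generic sketch of the same idiom, with a placeholder host name and documentation IP:

    { grep -v $'\tmy-host.example$' /etc/hosts; printf '192.0.2.10\tmy-host.example\n'; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts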
	I0920 17:36:40.733213  245557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:36:40.847265  245557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:36:40.863856  245557 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190 for IP: 192.168.39.158
	I0920 17:36:40.863898  245557 certs.go:194] generating shared ca certs ...
	I0920 17:36:40.863925  245557 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:40.864134  245557 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 17:36:41.007978  245557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt ...
	I0920 17:36:41.008017  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt: {Name:mkbb1e3a51019c4e83406d8748ea8210552ea552 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.008221  245557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key ...
	I0920 17:36:41.008234  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key: {Name:mk2dcada8581decbc501b050c6a03f21e66e112a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.008308  245557 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 17:36:41.129733  245557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt ...
	I0920 17:36:41.129766  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt: {Name:mke04674cac70a8962a647c3804e5e99b455bf6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.129942  245557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key ...
	I0920 17:36:41.129953  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key: {Name:mkb6f1f78834acbea54fe32363e27f933f4228ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.130023  245557 certs.go:256] generating profile certs ...
	I0920 17:36:41.130084  245557 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.key
	I0920 17:36:41.130099  245557 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt with IP's: []
	I0920 17:36:41.201155  245557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt ...
	I0920 17:36:41.201188  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: {Name:mk1833d3bbb2c8e05579222e591c1458c577f545 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.201349  245557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.key ...
	I0920 17:36:41.201360  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.key: {Name:mkace5ffe93f144a352a62d890af2292b0d676e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.201423  245557 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.key.83bf7d9f
	I0920 17:36:41.201440  245557 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.crt.83bf7d9f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.158]
	I0920 17:36:41.370047  245557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.crt.83bf7d9f ...
	I0920 17:36:41.370080  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.crt.83bf7d9f: {Name:mkf5b06795843289171f8aec4b7922bbb13be891 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.370249  245557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.key.83bf7d9f ...
	I0920 17:36:41.370262  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.key.83bf7d9f: {Name:mka349d2513fe2d14b9ca6aa0bfa8d7a73378d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.370335  245557 certs.go:381] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.crt.83bf7d9f -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.crt
	I0920 17:36:41.370407  245557 certs.go:385] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.key.83bf7d9f -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.key
	I0920 17:36:41.370452  245557 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.key
	I0920 17:36:41.370468  245557 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.crt with IP's: []
	I0920 17:36:41.587021  245557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.crt ...
	I0920 17:36:41.587061  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.crt: {Name:mkfc4b71c33e958d6677e7223f0b780b75e49b3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.587221  245557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.key ...
	I0920 17:36:41.587234  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.key: {Name:mk3aa7527b80ede87bad50a2915cf2799293254d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.587394  245557 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:36:41.587429  245557 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 17:36:41.587456  245557 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:36:41.587475  245557 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 17:36:41.588059  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:36:41.613973  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:36:41.636373  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:36:41.669307  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 17:36:41.693224  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 17:36:41.716434  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 17:36:41.739030  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:36:41.761987  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:36:41.785735  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:36:41.808837  245557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 17:36:41.824917  245557 ssh_runner.go:195] Run: openssl version
	I0920 17:36:41.830533  245557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:36:41.841288  245557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:36:41.845628  245557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:36:41.845706  245557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:36:41.851639  245557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
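The b5213941.0 name used in the symlink above is the OpenSSL subject hash of the minikube CA, which is why the log runs `openssl x509 -hash` first. A minimal sketch of deriving that link name by hand, assuming the same certificate paths as above:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # yields b5213941.0 for this CA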
	I0920 17:36:41.864422  245557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:36:41.868781  245557 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:36:41.868858  245557 kubeadm.go:392] StartCluster: {Name:addons-679190 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-679190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:36:41.868969  245557 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 17:36:41.869033  245557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 17:36:41.908638  245557 cri.go:89] found id: ""
	I0920 17:36:41.908716  245557 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 17:36:41.918913  245557 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 17:36:41.929048  245557 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 17:36:41.939489  245557 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 17:36:41.939517  245557 kubeadm.go:157] found existing configuration files:
	
	I0920 17:36:41.939604  245557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 17:36:41.948942  245557 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 17:36:41.949013  245557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 17:36:41.958442  245557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 17:36:41.967545  245557 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 17:36:41.967615  245557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 17:36:41.977594  245557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 17:36:41.987246  245557 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 17:36:41.987350  245557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 17:36:41.997309  245557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 17:36:42.006453  245557 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 17:36:42.006522  245557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 17:36:42.016044  245557 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 17:36:42.080202  245557 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 17:36:42.080363  245557 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 17:36:42.176051  245557 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 17:36:42.176190  245557 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 17:36:42.176291  245557 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 17:36:42.188037  245557 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 17:36:42.196848  245557 out.go:235]   - Generating certificates and keys ...
	I0920 17:36:42.196960  245557 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 17:36:42.197037  245557 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 17:36:42.434562  245557 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 17:36:42.521395  245557 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 17:36:42.607758  245557 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 17:36:42.669378  245557 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 17:36:42.904167  245557 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 17:36:42.904374  245557 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-679190 localhost] and IPs [192.168.39.158 127.0.0.1 ::1]
	I0920 17:36:43.188202  245557 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 17:36:43.188434  245557 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-679190 localhost] and IPs [192.168.39.158 127.0.0.1 ::1]
	I0920 17:36:43.287638  245557 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 17:36:43.473845  245557 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 17:36:43.593299  245557 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 17:36:43.593384  245557 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 17:36:43.987222  245557 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 17:36:44.336150  245557 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 17:36:44.457367  245557 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 17:36:44.695860  245557 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 17:36:44.844623  245557 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 17:36:44.845027  245557 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 17:36:44.847431  245557 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 17:36:44.849263  245557 out.go:235]   - Booting up control plane ...
	I0920 17:36:44.849358  245557 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 17:36:44.849439  245557 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 17:36:44.849514  245557 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 17:36:44.866081  245557 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 17:36:44.873618  245557 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 17:36:44.873725  245557 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 17:36:44.992494  245557 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 17:36:44.992682  245557 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 17:36:45.493964  245557 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.923125ms
	I0920 17:36:45.494050  245557 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 17:36:50.993341  245557 kubeadm.go:310] [api-check] The API server is healthy after 5.503314416s
	I0920 17:36:51.014477  245557 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 17:36:51.035005  245557 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 17:36:51.064511  245557 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 17:36:51.064710  245557 kubeadm.go:310] [mark-control-plane] Marking the node addons-679190 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 17:36:51.081848  245557 kubeadm.go:310] [bootstrap-token] Using token: r0jau5.grdtbm10vjda8jxv
	I0920 17:36:51.083289  245557 out.go:235]   - Configuring RBAC rules ...
	I0920 17:36:51.083448  245557 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 17:36:51.089533  245557 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 17:36:51.109444  245557 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 17:36:51.114960  245557 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 17:36:51.119855  245557 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 17:36:51.128234  245557 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 17:36:51.412359  245557 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 17:36:51.848915  245557 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 17:36:52.420530  245557 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 17:36:52.421383  245557 kubeadm.go:310] 
	I0920 17:36:52.421451  245557 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 17:36:52.421460  245557 kubeadm.go:310] 
	I0920 17:36:52.421602  245557 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 17:36:52.421621  245557 kubeadm.go:310] 
	I0920 17:36:52.421658  245557 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 17:36:52.421740  245557 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 17:36:52.421795  245557 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 17:36:52.421807  245557 kubeadm.go:310] 
	I0920 17:36:52.421870  245557 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 17:36:52.421881  245557 kubeadm.go:310] 
	I0920 17:36:52.421965  245557 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 17:36:52.421977  245557 kubeadm.go:310] 
	I0920 17:36:52.422055  245557 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 17:36:52.422173  245557 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 17:36:52.422286  245557 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 17:36:52.422300  245557 kubeadm.go:310] 
	I0920 17:36:52.422432  245557 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 17:36:52.422559  245557 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 17:36:52.422572  245557 kubeadm.go:310] 
	I0920 17:36:52.422674  245557 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r0jau5.grdtbm10vjda8jxv \
	I0920 17:36:52.422819  245557 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 \
	I0920 17:36:52.422877  245557 kubeadm.go:310] 	--control-plane 
	I0920 17:36:52.422886  245557 kubeadm.go:310] 
	I0920 17:36:52.422961  245557 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 17:36:52.422970  245557 kubeadm.go:310] 
	I0920 17:36:52.423049  245557 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r0jau5.grdtbm10vjda8jxv \
	I0920 17:36:52.423143  245557 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 
	I0920 17:36:52.423925  245557 kubeadm.go:310] W0920 17:36:42.058037     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:36:52.424286  245557 kubeadm.go:310] W0920 17:36:42.059124     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:36:52.424412  245557 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
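Both deprecation warnings point at the generated /var/tmp/minikube/kubeadm.yaml shown earlier, which still declares kubeadm.k8s.io/v1beta3. A sketch of the migration the warning itself suggests, plus the enable step for the Service-Kubelet warning; the output path is a placeholder, and `kubeadm config validate` is only present in newer kubeadm releases:

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-migrated.yaml
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /tmp/kubeadm-migrated.yaml
    sudo systemctl enable kubelet.service    # clears the [WARNING Service-Kubelet] note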
	I0920 17:36:52.424454  245557 cni.go:84] Creating CNI manager for ""
	I0920 17:36:52.424467  245557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 17:36:52.426470  245557 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 17:36:52.427945  245557 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 17:36:52.438400  245557 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
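The 496-byte file written here is the bridge CNI config that the earlier "recommending bridge" decision resolves to. A representative bridge conflist is sketched below purely for orientation; it is not the exact file minikube generates, and it is written to /tmp rather than over the live config:

    cat <<'EOF' > /tmp/bridge-example.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF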
	I0920 17:36:52.456765  245557 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 17:36:52.456859  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:52.456882  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-679190 minikube.k8s.io/updated_at=2024_09_20T17_36_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=addons-679190 minikube.k8s.io/primary=true
	I0920 17:36:52.484735  245557 ops.go:34] apiserver oom_adj: -16
	I0920 17:36:52.608325  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:53.108755  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:53.609368  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:54.109047  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:54.608496  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:55.109057  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:55.608759  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:56.108486  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:56.176721  245557 kubeadm.go:1113] duration metric: took 3.719930405s to wait for elevateKubeSystemPrivileges
	I0920 17:36:56.176772  245557 kubeadm.go:394] duration metric: took 14.307920068s to StartCluster
	I0920 17:36:56.176799  245557 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:56.176943  245557 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:36:56.177302  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:56.177559  245557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 17:36:56.177585  245557 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:36:56.177698  245557 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
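The toEnable map above is the addon set requested for this profile; note that volcano is later rejected because the addon does not support crio. The same switches are exposed through the minikube CLI; an illustrative invocation against this profile:

    minikube addons list -p addons-679190
    minikube addons enable metrics-server -p addons-679190
    minikube addons disable volcano -p addons-679190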
	I0920 17:36:56.177839  245557 addons.go:69] Setting yakd=true in profile "addons-679190"
	I0920 17:36:56.177853  245557 config.go:182] Loaded profile config "addons-679190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:36:56.177868  245557 addons.go:69] Setting metrics-server=true in profile "addons-679190"
	I0920 17:36:56.177883  245557 addons.go:234] Setting addon metrics-server=true in "addons-679190"
	I0920 17:36:56.177861  245557 addons.go:69] Setting inspektor-gadget=true in profile "addons-679190"
	I0920 17:36:56.177860  245557 addons.go:234] Setting addon yakd=true in "addons-679190"
	I0920 17:36:56.177925  245557 addons.go:234] Setting addon inspektor-gadget=true in "addons-679190"
	I0920 17:36:56.177941  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.177951  245557 addons.go:69] Setting registry=true in profile "addons-679190"
	I0920 17:36:56.177965  245557 addons.go:234] Setting addon registry=true in "addons-679190"
	I0920 17:36:56.177977  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.177987  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.177981  245557 addons.go:69] Setting default-storageclass=true in profile "addons-679190"
	I0920 17:36:56.178004  245557 addons.go:69] Setting storage-provisioner=true in profile "addons-679190"
	I0920 17:36:56.178010  245557 addons.go:69] Setting gcp-auth=true in profile "addons-679190"
	I0920 17:36:56.178020  245557 addons.go:234] Setting addon storage-provisioner=true in "addons-679190"
	I0920 17:36:56.178034  245557 addons.go:69] Setting cloud-spanner=true in profile "addons-679190"
	I0920 17:36:56.178041  245557 mustload.go:65] Loading cluster: addons-679190
	I0920 17:36:56.178050  245557 addons.go:234] Setting addon cloud-spanner=true in "addons-679190"
	I0920 17:36:56.178062  245557 addons.go:69] Setting volcano=true in profile "addons-679190"
	I0920 17:36:56.178075  245557 addons.go:234] Setting addon volcano=true in "addons-679190"
	I0920 17:36:56.178081  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.178094  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.178175  245557 addons.go:69] Setting ingress-dns=true in profile "addons-679190"
	I0920 17:36:56.178199  245557 addons.go:234] Setting addon ingress-dns=true in "addons-679190"
	I0920 17:36:56.178209  245557 config.go:182] Loaded profile config "addons-679190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:36:56.178245  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.177943  245557 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-679190"
	I0920 17:36:56.178273  245557 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-679190"
	I0920 17:36:56.178311  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.178483  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.178495  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.178513  245557 addons.go:69] Setting volumesnapshots=true in profile "addons-679190"
	I0920 17:36:56.178526  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.178532  245557 addons.go:234] Setting addon volumesnapshots=true in "addons-679190"
	I0920 17:36:56.178545  245557 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-679190"
	I0920 17:36:56.178483  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.178588  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.178533  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.178643  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.177984  245557 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-679190"
	I0920 17:36:56.178689  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.178694  245557 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-679190"
	I0920 17:36:56.178699  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.178709  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.178728  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.178810  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.178879  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.178557  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.179064  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.178020  245557 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-679190"
	I0920 17:36:56.179099  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.178495  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.179247  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.177991  245557 addons.go:69] Setting ingress=true in profile "addons-679190"
	I0920 17:36:56.179277  245557 addons.go:234] Setting addon ingress=true in "addons-679190"
	I0920 17:36:56.179294  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.179319  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.177934  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.178678  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.179597  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.178052  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.178588  245557 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-679190"
	I0920 17:36:56.180168  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.180484  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.180521  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.180557  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.180594  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.180778  245557 out.go:177] * Verifying Kubernetes components...
	I0920 17:36:56.182381  245557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:36:56.198770  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38321
	I0920 17:36:56.199048  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I0920 17:36:56.199209  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34951
	I0920 17:36:56.199459  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.199673  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.199690  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.199783  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36793
	I0920 17:36:56.199983  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.200000  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.200323  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.200418  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.200444  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.200491  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.200938  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.200956  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.201021  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.201085  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.201102  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.201191  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.201335  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.201376  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.201425  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.201697  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.201767  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41267
	I0920 17:36:56.202320  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.202366  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.202504  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.202548  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.202622  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.203097  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.203117  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.203410  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.203946  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.203983  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.206753  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.206798  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.207297  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.207335  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.208443  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.208479  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.211454  245557 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-679190"
	I0920 17:36:56.211505  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.211883  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.211928  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.216373  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43059
	I0920 17:36:56.216884  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.217519  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.217551  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.217867  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.218517  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.218557  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.220437  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
	I0920 17:36:56.220844  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.221299  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.221320  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.221697  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.222270  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.222325  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.232194  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I0920 17:36:56.232900  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.233545  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.233577  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.233996  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.234205  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.235969  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.236411  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.236459  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.241821  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0920 17:36:56.242381  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.242943  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.242972  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.243334  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.243565  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.246291  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0920 17:36:56.246829  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.247399  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.247419  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.247811  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.248026  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39577
	I0920 17:36:56.248056  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.248707  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37839
	I0920 17:36:56.249331  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.249815  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.249832  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.250336  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.250958  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.251000  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.251218  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45981
	I0920 17:36:56.251950  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.252204  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44715
	I0920 17:36:56.254426  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.255080  245557 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 17:36:56.256399  245557 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 17:36:56.256419  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 17:36:56.256441  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.256553  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38415
	I0920 17:36:56.257740  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I0920 17:36:56.257771  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0920 17:36:56.257868  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.257981  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.258005  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.258066  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.258116  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.258582  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.258601  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.258760  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.258780  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.258948  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.258963  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.259056  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.259091  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.259220  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.259239  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.259287  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.259455  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.259471  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.259756  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.259781  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.259825  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.259847  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46059
	I0920 17:36:56.259938  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.259979  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.260044  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.260089  245557 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 17:36:56.260188  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.260211  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.260684  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.260693  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.261178  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.261337  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.261375  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.261393  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.261456  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.261758  245557 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 17:36:56.261779  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 17:36:56.261799  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.262451  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.263727  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.263750  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.264542  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.264766  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.265197  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.265267  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.265425  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.265871  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.266255  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.266942  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.266966  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.267051  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.267450  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.267450  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.267501  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.267624  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.267627  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.267797  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.267885  245557 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 17:36:56.267955  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.268130  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.268477  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.268931  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:36:56.268949  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:36:56.269155  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.269223  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:36:56.269254  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:36:56.269261  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:36:56.269269  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:36:56.269276  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:36:56.270959  245557 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 17:36:56.272219  245557 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 17:36:56.272238  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 17:36:56.272258  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.272345  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:36:56.272373  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:36:56.272384  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	W0920 17:36:56.272473  245557 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 17:36:56.272957  245557 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 17:36:56.274416  245557 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 17:36:56.274435  245557 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 17:36:56.274458  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.278501  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.278771  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.278948  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.278966  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.279120  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.279308  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.279334  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.279356  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.279460  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.279524  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.279789  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.279989  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.280167  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.280659  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.283752  245557 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0920 17:36:56.285194  245557 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 17:36:56.285213  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 17:36:56.285236  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.287357  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46099
	I0920 17:36:56.288285  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.288568  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I0920 17:36:56.289501  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.289586  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.289657  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.289686  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.290284  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.290294  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.290306  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.290351  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.290365  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.290723  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.290770  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.290785  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.290986  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.291421  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.291439  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.291683  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.291884  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.292442  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43191
	I0920 17:36:56.292861  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.293333  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.293359  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.293708  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.294249  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.294285  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.296604  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0920 17:36:56.297114  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.297352  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33903
	I0920 17:36:56.297691  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.297708  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.297880  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.298156  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.300334  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45455
	I0920 17:36:56.300338  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.300443  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.300462  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.300908  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.301356  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.301512  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.301528  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.301592  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.301994  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.302179  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.302935  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.304975  245557 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 17:36:56.305417  245557 addons.go:234] Setting addon default-storageclass=true in "addons-679190"
	I0920 17:36:56.305487  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.305884  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.305971  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.306205  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.306459  245557 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 17:36:56.306481  245557 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 17:36:56.306510  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.308001  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 17:36:56.309187  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41931
	I0920 17:36:56.309544  245557 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 17:36:56.309565  245557 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 17:36:56.309594  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.309653  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38235
	I0920 17:36:56.310121  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.310680  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.310703  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.310778  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.311300  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.311383  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.311411  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.311552  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.311837  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.312758  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.312848  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.313699  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.314421  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.314691  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.314716  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.314740  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.314836  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.315154  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.315267  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.315378  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.317370  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.317387  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.317414  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37403
	I0920 17:36:56.317504  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37673
	I0920 17:36:56.317566  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.318020  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.318021  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.318113  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.318435  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.318527  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.318548  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.318565  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.318581  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.318605  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.318834  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35473
	I0920 17:36:56.318913  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.319073  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.319167  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.319300  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.319543  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.319811  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.319835  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.319903  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 17:36:56.320366  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.320592  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.321679  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.321733  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.322512  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.322733  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 17:36:56.323531  245557 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 17:36:56.323540  245557 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 17:36:56.324345  245557 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 17:36:56.325100  245557 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 17:36:56.325122  245557 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 17:36:56.325140  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.326093  245557 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:36:56.326112  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 17:36:56.326130  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.326232  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 17:36:56.326618  245557 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:36:56.328288  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 17:36:56.328983  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.329264  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.329496  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.329527  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.329672  245557 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:36:56.329714  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.329725  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.329730  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.329856  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.329944  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.330097  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.330102  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.330283  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.330461  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.330467  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.331167  245557 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 17:36:56.331190  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 17:36:56.331208  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.331686  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 17:36:56.333392  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 17:36:56.334233  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39629
	I0920 17:36:56.334709  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.334787  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.335248  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.335265  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.335335  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.335354  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.335411  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.335560  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.335619  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.335685  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.335796  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.335834  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.336064  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 17:36:56.337255  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.338333  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 17:36:56.338361  245557 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 17:36:56.338435  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36195
	I0920 17:36:56.338790  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.339334  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.339351  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.339451  245557 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 17:36:56.339481  245557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 17:36:56.339503  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.339705  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.340127  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.340219  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.342208  245557 out.go:177]   - Using image docker.io/busybox:stable
	I0920 17:36:56.342943  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.343386  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.343420  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.343599  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.343794  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.343962  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.344071  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.344485  245557 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 17:36:56.344506  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 17:36:56.344523  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.347682  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.348119  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.348141  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.348318  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.348498  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.348620  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.348730  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	W0920 17:36:56.350081  245557 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37614->192.168.39.158:22: read: connection reset by peer
	I0920 17:36:56.350113  245557 retry.go:31] will retry after 277.419822ms: ssh: handshake failed: read tcp 192.168.39.1:37614->192.168.39.158:22: read: connection reset by peer
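	(The two sshutil dial failures above are transient: sshd on the freshly booted node resets the first connections, and minikube's retry.go simply backs off and tries again. Below is a minimal, hypothetical sketch of that retry-with-backoff pattern — dialSSH, the attempt count, and the delays are illustrative stand-ins, not minikube's actual implementation.)

	```go
	// Hypothetical retry-with-backoff loop in the spirit of the retry.go
	// messages above; dialSSH and the timings are illustrative only.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func dialSSH(addr string) error {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			return err
		}
		return conn.Close() // a real client would start the SSH handshake here
	}

	func main() {
		addr := "192.168.39.158:22"
		delay := 200 * time.Millisecond
		for attempt := 1; attempt <= 5; attempt++ {
			if err := dialSSH(addr); err != nil {
				fmt.Printf("attempt %d failed: %v; retrying after %v\n", attempt, err, delay)
				time.Sleep(delay)
				delay *= 2 // exponential backoff between attempts
				continue
			}
			fmt.Println("ssh endpoint reachable")
			return
		}
		fmt.Println("giving up after 5 attempts")
	}
	```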
	I0920 17:36:56.358579  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37577
	I0920 17:36:56.359069  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.359542  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.359571  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.359910  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.360078  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.361619  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.361824  245557 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 17:36:56.361842  245557 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 17:36:56.361860  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.364857  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.365235  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.365271  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.365432  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.365644  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.365803  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.365981  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	W0920 17:36:56.368948  245557 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37630->192.168.39.158:22: read: connection reset by peer
	I0920 17:36:56.368974  245557 retry.go:31] will retry after 189.220194ms: ssh: handshake failed: read tcp 192.168.39.1:37630->192.168.39.158:22: read: connection reset by peer
	I0920 17:36:56.558562  245557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:36:56.558883  245557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 17:36:56.674915  245557 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 17:36:56.674949  245557 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 17:36:56.736424  245557 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 17:36:56.736462  245557 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 17:36:56.738918  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 17:36:56.740403  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 17:36:56.779127  245557 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 17:36:56.779166  245557 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 17:36:56.785790  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:36:56.816546  245557 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 17:36:56.816572  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 17:36:56.818607  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 17:36:56.833977  245557 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 17:36:56.834015  245557 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 17:36:56.925219  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 17:36:56.958576  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 17:36:57.000786  245557 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 17:36:57.000810  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 17:36:57.009273  245557 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 17:36:57.009317  245557 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 17:36:57.024743  245557 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 17:36:57.024770  245557 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 17:36:57.043071  245557 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 17:36:57.043099  245557 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 17:36:57.092905  245557 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 17:36:57.092942  245557 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 17:36:57.127201  245557 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 17:36:57.127236  245557 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 17:36:57.158480  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 17:36:57.178499  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 17:36:57.215557  245557 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 17:36:57.215592  245557 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 17:36:57.238838  245557 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 17:36:57.238870  245557 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 17:36:57.247948  245557 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 17:36:57.247973  245557 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 17:36:57.272793  245557 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 17:36:57.272831  245557 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 17:36:57.292572  245557 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 17:36:57.292600  245557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 17:36:57.414471  245557 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 17:36:57.414500  245557 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 17:36:57.441852  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 17:36:57.459384  245557 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 17:36:57.459417  245557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 17:36:57.459605  245557 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 17:36:57.459635  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 17:36:57.487179  245557 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 17:36:57.487211  245557 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 17:36:57.600664  245557 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:36:57.600691  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 17:36:57.665586  245557 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 17:36:57.665618  245557 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 17:36:57.669993  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 17:36:57.692412  245557 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 17:36:57.692454  245557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 17:36:57.777267  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:36:57.878278  245557 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 17:36:57.878309  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 17:36:57.884855  245557 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 17:36:57.884886  245557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 17:36:57.939139  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 17:36:58.166138  245557 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 17:36:58.166167  245557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 17:36:58.648447  245557 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 17:36:58.648486  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 17:36:58.703324  245557 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.144399607s)
	I0920 17:36:58.703358  245557 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.144764949s)
	I0920 17:36:58.703371  245557 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0920 17:36:58.704166  245557 node_ready.go:35] waiting up to 6m0s for node "addons-679190" to be "Ready" ...
	I0920 17:36:58.710465  245557 node_ready.go:49] node "addons-679190" has status "Ready":"True"
	I0920 17:36:58.710493  245557 node_ready.go:38] duration metric: took 6.288327ms for node "addons-679190" to be "Ready" ...
	I0920 17:36:58.710503  245557 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:36:58.723116  245557 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dsxdk" in "kube-system" namespace to be "Ready" ...
	I0920 17:36:59.028902  245557 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 17:36:59.028955  245557 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 17:36:59.192793  245557 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 17:36:59.192824  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 17:36:59.212433  245557 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-679190" context rescaled to 1 replicas
	I0920 17:36:59.496357  245557 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 17:36:59.496454  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 17:36:59.712928  245557 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 17:36:59.712961  245557 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 17:37:00.075273  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 17:37:00.759434  245557 pod_ready.go:103] pod "coredns-7c65d6cfc9-dsxdk" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:02.878793  245557 pod_ready.go:103] pod "coredns-7c65d6cfc9-dsxdk" in "kube-system" namespace has status "Ready":"False"
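	(Above, pod_ready.go polls the coredns pod until its Ready condition turns True, within a 6m0s budget. A rough client-go sketch of that kind of check is shown below, assuming the on-node kubeconfig path and the pod name taken from this log; it is not the test harness's actual code.)

	```go
	// Rough client-go sketch of polling a pod's Ready condition, mirroring the
	// pod_ready.go messages above. Kubeconfig path and pod name are assumptions.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s budget in the log
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
				"coredns-7c65d6cfc9-dsxdk", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}
	```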
	I0920 17:37:03.327281  245557 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 17:37:03.327326  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:37:03.331115  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:37:03.331744  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:37:03.331780  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:37:03.332019  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:37:03.332249  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:37:03.332520  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:37:03.332731  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:37:03.544424  245557 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 17:37:03.611249  245557 addons.go:234] Setting addon gcp-auth=true in "addons-679190"
	I0920 17:37:03.611311  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:37:03.611651  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:37:03.611695  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:37:03.627843  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35373
	I0920 17:37:03.628403  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:37:03.628939  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:37:03.628963  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:37:03.629370  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:37:03.629868  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:37:03.629917  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:37:03.647166  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38535
	I0920 17:37:03.647674  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:37:03.648222  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:37:03.648244  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:37:03.648605  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:37:03.648924  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:37:03.650642  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:37:03.650881  245557 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 17:37:03.650914  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:37:03.653472  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:37:03.653934  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:37:03.653975  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:37:03.654165  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:37:03.654379  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:37:03.654559  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:37:03.654756  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:37:04.200814  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.461859921s)
	I0920 17:37:04.200874  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.200887  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.200907  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.460468965s)
	I0920 17:37:04.200955  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.200972  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201021  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.41520737s)
	I0920 17:37:04.201047  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201055  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201068  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.382435723s)
	I0920 17:37:04.201090  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201101  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201151  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.275909746s)
	I0920 17:37:04.201167  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201168  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.242568548s)
	I0920 17:37:04.201174  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201183  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201191  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201230  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.042717172s)
	I0920 17:37:04.201247  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201255  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201259  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.022724165s)
	I0920 17:37:04.201276  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201286  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201348  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.759469305s)
	I0920 17:37:04.201367  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201375  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201450  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.5314256s)
	I0920 17:37:04.201467  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201476  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201559  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.201567  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.4242534s)
	I0920 17:37:04.201598  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	W0920 17:37:04.201608  245557 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 17:37:04.201637  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.201647  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.201647  245557 retry.go:31] will retry after 372.12607ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
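The failure retried above is the usual CRD ordering race: the VolumeSnapshot CRDs and a VolumeSnapshotClass object go into a single apply, and the API server has not finished registering the new kind when kubectl tries to map it, so the command exits 1 with "no matches for kind" and minikube schedules a retry (and later re-applies with --force). The following is a minimal sketch of that retry pattern only, not minikube's actual implementation; it assumes kubectl is on PATH and the manifest paths are purely illustrative.

// Sketch only: retry "kubectl apply" while CRD-dependent objects cannot be
// mapped yet. Not minikube's code; kubectl on PATH and the file names below
// are assumptions for illustration.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func applyWithRetry(files []string, attempts int, delay time.Duration) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
		// "no matches for kind" means the CRD is not registered yet; wait and retry.
		if strings.Contains(string(out), "no matches for kind") {
			time.Sleep(delay)
			continue
		}
		return lastErr
	}
	return lastErr
}

func main() {
	files := []string{
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml", // CRD
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",                    // object that needs the CRD
	}
	if err := applyWithRetry(files, 5, 500*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}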
	I0920 17:37:04.201655  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201665  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201575  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.201725  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201731  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201734  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.262556719s)
	I0920 17:37:04.201759  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201768  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201842  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.201915  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.201932  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.201952  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.201961  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.201965  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.201970  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.201977  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201983  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.202041  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.202065  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.202070  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.202077  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.202082  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.202642  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.202677  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.202684  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.202691  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.202698  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.204814  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.204838  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.204848  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.204856  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.204860  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.204874  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.204885  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.204897  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.204919  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.204946  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.204956  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.204978  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.205011  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.205017  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.205297  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.205350  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.205649  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.205667  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.205737  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.205747  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.205816  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.205822  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.205830  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.205837  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.205890  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.205929  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.205935  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.205943  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.205949  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.207122  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.207136  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.207181  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.207197  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.207230  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.207298  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.207306  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.207316  245557 addons.go:475] Verifying addon metrics-server=true in "addons-679190"
	I0920 17:37:04.207442  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.207454  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.207468  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.207476  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.207476  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.207484  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.207485  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.207492  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.207513  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.207530  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.207619  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.207638  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.207746  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.207768  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.207775  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.207784  245557 addons.go:475] Verifying addon registry=true in "addons-679190"
	I0920 17:37:04.209273  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.209289  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.209299  245557 addons.go:475] Verifying addon ingress=true in "addons-679190"
	I0920 17:37:04.210060  245557 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-679190 service yakd-dashboard -n yakd-dashboard
	
	I0920 17:37:04.210094  245557 out.go:177] * Verifying registry addon...
	I0920 17:37:04.211026  245557 out.go:177] * Verifying ingress addon...
	I0920 17:37:04.214177  245557 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 17:37:04.214180  245557 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 17:37:04.219984  245557 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 17:37:04.220012  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:04.232040  245557 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 17:37:04.232063  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
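The kapi.go lines above poll every pod matched by a label selector and report the aggregate state until the pods leave Pending and become Ready. A minimal client-go sketch of the same idea follows; it is not minikube's code, assumes a kubeconfig at ~/.kube/config and the k8s.io/client-go module, and reuses the registry selector from the log.

// Sketch only: wait for all pods matching a label selector to report Ready.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func allReady(pods []corev1.Pod) bool {
	if len(pods) == 0 {
		return false
	}
	for _, p := range pods {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false
		}
	}
	return true
}

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	selector := "kubernetes.io/minikube-addons=registry" // selector taken from the log
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			panic(err)
		}
		if allReady(pods.Items) {
			fmt.Println("all pods Ready for", selector)
			return
		}
		fmt.Printf("found %d pods for %q, still waiting\n", len(pods.Items), selector)
		time.Sleep(500 * time.Millisecond)
	}
}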
	I0920 17:37:04.250786  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.250821  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.251111  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.251130  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	W0920 17:37:04.251227  245557 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0920 17:37:04.260835  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.260869  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.261164  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.261183  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.574205  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:37:04.725222  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:04.729585  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:05.249467  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:05.249466  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:05.283804  245557 pod_ready.go:103] pod "coredns-7c65d6cfc9-dsxdk" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:05.471390  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.396046848s)
	I0920 17:37:05.471473  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:05.471495  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:05.471416  245557 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.820510198s)
	I0920 17:37:05.471936  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:05.471953  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:05.471964  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:05.471971  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:05.472409  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:05.472432  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:05.472435  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:05.472454  245557 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-679190"
	I0920 17:37:05.473667  245557 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:37:05.474639  245557 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 17:37:05.476343  245557 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 17:37:05.477417  245557 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 17:37:05.477751  245557 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 17:37:05.477771  245557 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 17:37:05.501716  245557 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 17:37:05.501756  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:05.627006  245557 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 17:37:05.627047  245557 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 17:37:05.723361  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:05.729929  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:05.777291  245557 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 17:37:05.777327  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 17:37:05.793054  245557 pod_ready.go:93] pod "coredns-7c65d6cfc9-dsxdk" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:05.793083  245557 pod_ready.go:82] duration metric: took 7.069936594s for pod "coredns-7c65d6cfc9-dsxdk" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.793096  245557 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jln6k" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.808093  245557 pod_ready.go:93] pod "coredns-7c65d6cfc9-jln6k" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:05.808122  245557 pod_ready.go:82] duration metric: took 15.016714ms for pod "coredns-7c65d6cfc9-jln6k" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.808135  245557 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.815411  245557 pod_ready.go:93] pod "etcd-addons-679190" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:05.815439  245557 pod_ready.go:82] duration metric: took 7.295923ms for pod "etcd-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.815451  245557 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.826707  245557 pod_ready.go:93] pod "kube-apiserver-addons-679190" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:05.826733  245557 pod_ready.go:82] duration metric: took 11.271544ms for pod "kube-apiserver-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.826746  245557 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.832864  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 17:37:05.843767  245557 pod_ready.go:93] pod "kube-controller-manager-addons-679190" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:05.843804  245557 pod_ready.go:82] duration metric: took 17.048824ms for pod "kube-controller-manager-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.843818  245557 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-klvxz" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.983081  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:06.137818  245557 pod_ready.go:93] pod "kube-proxy-klvxz" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:06.137858  245557 pod_ready.go:82] duration metric: took 294.032966ms for pod "kube-proxy-klvxz" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:06.137870  245557 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:06.226275  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:06.226546  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:06.672283  245557 pod_ready.go:93] pod "kube-scheduler-addons-679190" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:06.672311  245557 pod_ready.go:82] duration metric: took 534.434193ms for pod "kube-scheduler-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:06.672322  245557 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:06.676924  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:06.723323  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:06.723483  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:06.996072  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.421807501s)
	I0920 17:37:06.996136  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:06.996154  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:06.996393  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:06.996417  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:06.996426  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:06.996434  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:06.996451  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:06.996683  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:06.996693  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:06.996709  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:07.016129  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:07.083780  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.250840857s)
	I0920 17:37:07.083874  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:07.083897  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:07.084188  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:07.084212  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:07.084223  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:07.084231  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:07.084473  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:07.084497  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:07.084529  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:07.086428  245557 addons.go:475] Verifying addon gcp-auth=true in "addons-679190"
	I0920 17:37:07.089016  245557 out.go:177] * Verifying gcp-auth addon...
	I0920 17:37:07.091518  245557 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 17:37:07.134782  245557 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 17:37:07.134813  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:07.235817  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:07.236611  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:07.488734  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:07.595170  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:07.721622  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:07.723013  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:07.986730  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:08.097723  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:08.235499  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:08.236709  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:08.484933  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:08.595349  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:08.679620  245557 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:08.720021  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:08.720047  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:08.981981  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:09.095366  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:09.219800  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:09.220283  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:09.482502  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:09.596169  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:09.718911  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:09.719167  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:09.981992  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:10.095430  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:10.218531  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:10.218998  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:10.482432  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:10.597590  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:10.965954  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:10.966262  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:11.066406  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:11.095561  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:11.179050  245557 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:11.219287  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:11.219338  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:11.482288  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:11.595743  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:11.718737  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:11.720102  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:11.983121  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:12.096110  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:12.218665  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:12.219012  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:12.481993  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:12.595323  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:12.719728  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:12.719803  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:12.983136  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:13.095240  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:13.219271  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:13.220586  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:13.482367  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:13.594980  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:13.679290  245557 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:13.719607  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:13.719858  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:13.982406  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:14.096116  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:14.218143  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:14.218348  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:14.481878  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:14.595473  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:14.718781  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:14.719599  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:14.983016  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:15.098359  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:15.218446  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:15.219530  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:15.482914  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:15.596240  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:15.680026  245557 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:15.718767  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:15.719239  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:15.982692  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:16.097226  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:16.218654  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:16.219309  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:16.482137  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:16.595706  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:16.719296  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:16.719734  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:16.981704  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:17.095888  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:17.218140  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:17.219734  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:17.481807  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:17.596056  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:17.720484  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:17.720855  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:17.982237  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:18.095526  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:18.179237  245557 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:18.219858  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:18.220498  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:18.482532  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:18.595184  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:18.719021  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:18.719806  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:18.982493  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:19.096098  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:19.218547  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:19.219496  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:19.482320  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:19.595193  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:19.719166  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:19.720319  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:19.982325  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:20.367324  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:20.367356  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:20.367706  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:20.370491  245557 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:20.482782  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:20.595136  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:20.718920  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:20.719192  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:20.981947  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:21.095534  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:21.218869  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:21.219583  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:21.482550  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:21.595874  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:21.719021  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:21.719268  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:21.982430  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:22.095030  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:22.219384  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:22.219891  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:22.482224  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:22.595405  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:22.679943  245557 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:22.718958  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:22.719141  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:22.982733  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:23.096227  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:23.219788  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:23.220067  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:23.483912  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:23.595388  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:23.718637  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:23.719016  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:23.982130  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:24.095662  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:24.218721  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:24.219057  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:24.482535  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:24.595793  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:24.678781  245557 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:24.678812  245557 pod_ready.go:82] duration metric: took 18.006481882s for pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:24.678822  245557 pod_ready.go:39] duration metric: took 25.968303705s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:37:24.678872  245557 api_server.go:52] waiting for apiserver process to appear ...
	I0920 17:37:24.678948  245557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:37:24.702218  245557 api_server.go:72] duration metric: took 28.524587153s to wait for apiserver process to appear ...
	I0920 17:37:24.702254  245557 api_server.go:88] waiting for apiserver healthz status ...
	I0920 17:37:24.702293  245557 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0920 17:37:24.706595  245557 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0920 17:37:24.707660  245557 api_server.go:141] control plane version: v1.31.1
	I0920 17:37:24.707685  245557 api_server.go:131] duration metric: took 5.422585ms to wait for apiserver health ...
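The healthz step amounts to an HTTPS GET against the apiserver followed by a check that the body is "ok". A minimal sketch, assuming the cluster keeps the default binding that lets unauthenticated clients read /healthz; certificate verification is skipped purely for illustration, and the address is copied from the log.

// Sketch only: probe the apiserver /healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.158:8443/healthz") // address taken from the log
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}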
	I0920 17:37:24.707694  245557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 17:37:24.715504  245557 system_pods.go:59] 17 kube-system pods found
	I0920 17:37:24.715541  245557 system_pods.go:61] "coredns-7c65d6cfc9-dsxdk" [3371b6ad-8f6e-4474-a677-f07c0b4e0a38] Running
	I0920 17:37:24.715552  245557 system_pods.go:61] "csi-hostpath-attacher-0" [1630eb87-6fea-4510-8b0d-cb108c179963] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 17:37:24.715563  245557 system_pods.go:61] "csi-hostpath-resizer-0" [9b98474b-c72c-4230-973c-a76ed4f731c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 17:37:24.715573  245557 system_pods.go:61] "csi-hostpathplugin-9m9gc" [00f39caf-3478-4abb-922e-28239885d7bf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 17:37:24.715580  245557 system_pods.go:61] "etcd-addons-679190" [4d3ed97e-c5a9-4017-86dd-68689e55e1f0] Running
	I0920 17:37:24.715586  245557 system_pods.go:61] "kube-apiserver-addons-679190" [24fac84a-44d2-4e96-8680-606874e6b5bb] Running
	I0920 17:37:24.715591  245557 system_pods.go:61] "kube-controller-manager-addons-679190" [5dec7f61-8787-4c49-8f6f-998e2dbc01cb] Running
	I0920 17:37:24.715597  245557 system_pods.go:61] "kube-ingress-dns-minikube" [1a3b7852-a919-4f95-9e5c-20ead0de76ad] Running
	I0920 17:37:24.715603  245557 system_pods.go:61] "kube-proxy-klvxz" [6edcd5de-35eb-4e5b-8073-e2a49428b300] Running
	I0920 17:37:24.715609  245557 system_pods.go:61] "kube-scheduler-addons-679190" [143ec669-777c-495c-80f3-a792643b75e8] Running
	I0920 17:37:24.715619  245557 system_pods.go:61] "metrics-server-84c5f94fbc-fj4mf" [adb63308-9d43-444e-b31b-a5efeef5d323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 17:37:24.715625  245557 system_pods.go:61] "nvidia-device-plugin-daemonset-b5wj9" [eb9faaf1-05e4-4f88-abbb-479f222d2664] Running
	I0920 17:37:24.715637  245557 system_pods.go:61] "registry-66c9cd494c-7g6lm" [4ad8ab0b-f43b-475a-984c-11d2a23963c0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 17:37:24.715646  245557 system_pods.go:61] "registry-proxy-k96rm" [0612b678-15da-44d6-acfb-c29dd8dd2b7d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 17:37:24.715678  245557 system_pods.go:61] "snapshot-controller-56fcc65765-5qmkt" [a4b880c1-bd61-45bb-80cf-ae1dd6af7d4e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:37:24.715689  245557 system_pods.go:61] "snapshot-controller-56fcc65765-cwbl2" [6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:37:24.715696  245557 system_pods.go:61] "storage-provisioner" [339440d6-4355-4e26-a436-2edefb4d7b9d] Running
	I0920 17:37:24.715707  245557 system_pods.go:74] duration metric: took 8.003633ms to wait for pod list to return data ...
	I0920 17:37:24.715719  245557 default_sa.go:34] waiting for default service account to be created ...
	I0920 17:37:24.719187  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:24.719879  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:24.721134  245557 default_sa.go:45] found service account: "default"
	I0920 17:37:24.721158  245557 default_sa.go:55] duration metric: took 5.428135ms for default service account to be created ...
	I0920 17:37:24.721168  245557 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 17:37:24.730946  245557 system_pods.go:86] 17 kube-system pods found
	I0920 17:37:24.730977  245557 system_pods.go:89] "coredns-7c65d6cfc9-dsxdk" [3371b6ad-8f6e-4474-a677-f07c0b4e0a38] Running
	I0920 17:37:24.730988  245557 system_pods.go:89] "csi-hostpath-attacher-0" [1630eb87-6fea-4510-8b0d-cb108c179963] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 17:37:24.730995  245557 system_pods.go:89] "csi-hostpath-resizer-0" [9b98474b-c72c-4230-973c-a76ed4f731c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 17:37:24.731005  245557 system_pods.go:89] "csi-hostpathplugin-9m9gc" [00f39caf-3478-4abb-922e-28239885d7bf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 17:37:24.731009  245557 system_pods.go:89] "etcd-addons-679190" [4d3ed97e-c5a9-4017-86dd-68689e55e1f0] Running
	I0920 17:37:24.731014  245557 system_pods.go:89] "kube-apiserver-addons-679190" [24fac84a-44d2-4e96-8680-606874e6b5bb] Running
	I0920 17:37:24.731017  245557 system_pods.go:89] "kube-controller-manager-addons-679190" [5dec7f61-8787-4c49-8f6f-998e2dbc01cb] Running
	I0920 17:37:24.731021  245557 system_pods.go:89] "kube-ingress-dns-minikube" [1a3b7852-a919-4f95-9e5c-20ead0de76ad] Running
	I0920 17:37:24.731024  245557 system_pods.go:89] "kube-proxy-klvxz" [6edcd5de-35eb-4e5b-8073-e2a49428b300] Running
	I0920 17:37:24.731027  245557 system_pods.go:89] "kube-scheduler-addons-679190" [143ec669-777c-495c-80f3-a792643b75e8] Running
	I0920 17:37:24.731031  245557 system_pods.go:89] "metrics-server-84c5f94fbc-fj4mf" [adb63308-9d43-444e-b31b-a5efeef5d323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 17:37:24.731036  245557 system_pods.go:89] "nvidia-device-plugin-daemonset-b5wj9" [eb9faaf1-05e4-4f88-abbb-479f222d2664] Running
	I0920 17:37:24.731041  245557 system_pods.go:89] "registry-66c9cd494c-7g6lm" [4ad8ab0b-f43b-475a-984c-11d2a23963c0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 17:37:24.731047  245557 system_pods.go:89] "registry-proxy-k96rm" [0612b678-15da-44d6-acfb-c29dd8dd2b7d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 17:37:24.731053  245557 system_pods.go:89] "snapshot-controller-56fcc65765-5qmkt" [a4b880c1-bd61-45bb-80cf-ae1dd6af7d4e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:37:24.731061  245557 system_pods.go:89] "snapshot-controller-56fcc65765-cwbl2" [6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:37:24.731065  245557 system_pods.go:89] "storage-provisioner" [339440d6-4355-4e26-a436-2edefb4d7b9d] Running
	I0920 17:37:24.731073  245557 system_pods.go:126] duration metric: took 9.894741ms to wait for k8s-apps to be running ...
	I0920 17:37:24.731083  245557 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 17:37:24.731128  245557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:37:24.756246  245557 system_svc.go:56] duration metric: took 25.149435ms WaitForService to wait for kubelet
	I0920 17:37:24.756281  245557 kubeadm.go:582] duration metric: took 28.578660436s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:37:24.756309  245557 node_conditions.go:102] verifying NodePressure condition ...
	I0920 17:37:24.759977  245557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 17:37:24.760008  245557 node_conditions.go:123] node cpu capacity is 2
	I0920 17:37:24.760024  245557 node_conditions.go:105] duration metric: took 3.709037ms to run NodePressure ...
	I0920 17:37:24.760039  245557 start.go:241] waiting for startup goroutines ...
	I0920 17:37:24.982102  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:25.692769  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:25.692898  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:25.693075  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:25.695274  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:25.792088  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:25.792245  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:25.792632  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:25.985189  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:26.096256  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:26.218968  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:26.219398  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:26.483148  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:26.596907  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:26.720942  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:26.723460  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:26.984520  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:27.096894  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:27.220406  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:27.220769  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:27.484078  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:27.595695  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:27.720606  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:27.721639  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:27.982903  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:28.095938  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:28.219987  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:28.220971  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:28.481389  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:28.610426  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:28.719978  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:28.720142  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:28.983078  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:29.095989  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:29.218709  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:29.218930  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:29.482101  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:29.595615  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:29.719088  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:29.719199  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:29.982791  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:30.095599  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:30.217922  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:30.218986  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:30.482436  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:30.595942  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:30.718190  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:30.719931  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:30.981672  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:31.095251  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:31.219963  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:31.221121  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:31.482257  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:31.595251  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:31.720133  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:31.720333  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:31.982198  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:32.096412  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:32.219862  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:32.219953  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:32.482618  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:32.594901  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:32.719007  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:32.719260  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:32.982391  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:33.401207  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:33.401585  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:33.401765  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:33.483109  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:33.596749  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:33.720837  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:33.721022  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:33.982168  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:34.096308  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:34.218951  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:34.219384  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:34.482769  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:34.598370  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:34.720347  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:34.720439  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:34.982322  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:35.095917  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:35.219540  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:35.219941  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:35.487140  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:35.594855  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:35.718904  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:35.720766  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:35.982409  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:36.095826  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:36.219811  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:36.220538  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:36.482003  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:36.594821  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:36.719933  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:36.720068  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:36.981755  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:37.095191  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:37.219188  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:37.219358  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:37.659063  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:37.661446  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:37.720179  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:37.721456  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:37.982458  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:38.094851  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:38.218310  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:38.220085  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:38.483023  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:38.594895  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:38.722426  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:38.725839  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:38.982505  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:39.098447  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:39.218837  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:39.218838  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:39.481811  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:39.595792  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:39.718814  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:39.719320  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:39.982909  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:40.095985  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:40.218489  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:40.219278  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:40.481967  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:40.595737  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:40.718910  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:40.719238  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:40.983283  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:41.095940  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:41.219565  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:41.221013  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:41.482435  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:41.595334  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:41.720647  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:41.720684  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:41.983153  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:42.094768  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:42.220086  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:42.220394  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:42.482518  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:42.595381  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:42.720206  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:42.720419  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:42.983132  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:43.095176  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:43.219302  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:43.219642  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:43.482642  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:43.595409  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:43.721373  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:43.721632  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:43.982605  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:44.098347  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:44.219089  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:44.221006  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:44.482252  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:44.595145  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:44.719105  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:44.719297  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:44.982659  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:45.095138  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:45.219384  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:45.220254  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:45.482221  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:45.595501  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:45.718470  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:45.719300  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:45.982785  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:46.095155  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:46.219224  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:46.219462  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:46.483020  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:46.595174  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:46.719798  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:46.720503  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:46.983110  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:47.095113  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:47.219161  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:47.219467  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:47.482526  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:47.596127  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:47.719708  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:47.722386  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:47.983136  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:48.095553  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:48.219791  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:48.220277  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:48.482138  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:48.595891  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:48.719594  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:48.719903  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:48.983736  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:49.095768  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:49.218264  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:49.218364  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:49.482328  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:49.594924  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:49.720709  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:49.721038  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:49.984147  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:50.095908  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:50.218641  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:50.219557  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:50.482216  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:50.595636  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:50.718073  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:50.718453  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:50.982477  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:51.096470  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:51.218737  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:51.219071  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:51.482552  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:51.594846  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:51.719591  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:51.719929  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:51.982403  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:52.094835  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:52.219094  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:52.219308  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:52.505584  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:52.595700  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:52.722416  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:52.722934  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:52.982125  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:53.095783  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:53.219934  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:53.220724  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:53.481962  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:53.595411  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:53.719033  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:53.719514  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:53.983809  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:54.104151  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:54.218336  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:54.220411  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:54.483652  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:54.596251  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:54.719528  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:54.720220  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:55.232368  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:55.232724  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:55.233181  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:55.233391  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:55.481929  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:55.595861  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:55.718543  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:55.718985  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:55.983911  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:56.095903  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:56.220860  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:56.221898  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:56.482778  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:56.595842  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:56.718847  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:56.719100  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:56.982103  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:57.095564  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:57.218351  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:57.218561  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:57.482555  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:57.595003  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:57.719076  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:57.719278  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:57.983394  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:58.095258  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:58.218602  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:58.219149  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:58.482754  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:58.595355  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:58.719161  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:58.719321  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:58.981879  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:59.095291  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:59.219381  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:59.220178  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:59.482616  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:59.596295  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:59.719294  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:59.719426  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:59.993620  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:00.096081  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:00.219485  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:38:00.219841  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:00.482663  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:00.595396  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:00.720126  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:38:00.720694  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:00.992204  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:01.097761  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:01.219498  245557 kapi.go:107] duration metric: took 57.005316247s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 17:38:01.220075  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:01.484002  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:01.595128  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:01.719104  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:02.136461  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:02.137748  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:02.239284  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:02.484059  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:02.597924  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:02.718800  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:02.982988  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:03.095774  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:03.226940  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:03.482947  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:03.595687  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:03.718635  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:03.982370  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:04.102128  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:04.219301  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:04.483127  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:04.595106  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:04.719141  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:04.981631  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:05.101287  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:05.219055  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:05.482258  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:05.595242  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:05.718751  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:05.982406  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:06.106689  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:06.221343  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:06.482771  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:06.594811  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:06.719092  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:06.981985  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:07.097061  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:07.219541  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:07.483408  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:07.595174  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:07.719181  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:07.982038  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:08.095412  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:08.220499  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:08.484258  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:08.595950  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:08.718848  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:08.983659  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:09.095507  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:09.223029  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:09.486835  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:09.599413  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:09.719800  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:09.982147  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:10.095511  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:10.669481  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:10.669715  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:10.669778  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:10.753003  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:10.984155  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:11.096392  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:11.226061  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:11.482229  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:11.595481  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:11.719332  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:11.982116  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:12.095541  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:12.221657  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:12.482601  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:12.595013  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:12.731224  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:12.982914  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:13.095203  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:13.220342  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:13.483110  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:13.598709  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:13.718995  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:13.983441  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:14.094805  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:14.225305  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:14.482669  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:14.596239  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:14.720831  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:14.982905  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:15.095677  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:15.218902  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:15.482688  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:15.595271  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:15.752797  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:15.982814  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:16.095989  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:16.218789  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:16.482153  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:16.595532  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:16.718428  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:16.982631  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:17.095161  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:17.218895  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:17.482903  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:17.595571  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:17.720206  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:17.981706  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:18.095526  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:18.221214  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:18.483135  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:18.595497  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:18.723739  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:18.983544  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:19.096959  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:19.218551  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:19.482638  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:19.595067  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:19.718890  245557 kapi.go:107] duration metric: took 1m15.504703683s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 17:38:19.986184  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:20.098494  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:20.482419  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:20.595350  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:20.984285  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:21.095405  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:21.482801  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:21.595705  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:21.982482  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:22.095811  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:22.482263  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:22.595985  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:22.983166  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:23.095955  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:23.482802  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:23.607423  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:23.983139  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:24.095964  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:24.482710  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:24.595867  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:24.982831  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:25.095296  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:25.485376  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:25.602264  245557 kapi.go:107] duration metric: took 1m18.510746029s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 17:38:25.604077  245557 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-679190 cluster.
	I0920 17:38:25.605455  245557 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 17:38:25.607126  245557 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 17:38:25.983952  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:26.489199  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:26.984440  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:27.486001  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:27.982356  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:28.481673  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:28.985677  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:29.483232  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:29.981588  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:30.486914  245557 kapi.go:107] duration metric: took 1m25.009495563s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 17:38:30.489426  245557 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, cloud-spanner, inspektor-gadget, nvidia-device-plugin, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0920 17:38:30.491035  245557 addons.go:510] duration metric: took 1m34.313356496s for enable addons: enabled=[storage-provisioner ingress-dns cloud-spanner inspektor-gadget nvidia-device-plugin metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0920 17:38:30.491107  245557 start.go:246] waiting for cluster config update ...
	I0920 17:38:30.491135  245557 start.go:255] writing updated cluster config ...
	I0920 17:38:30.491469  245557 ssh_runner.go:195] Run: rm -f paused
	I0920 17:38:30.547139  245557 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 17:38:30.549058  245557 out.go:177] * Done! kubectl is now configured to use "addons-679190" cluster and "default" namespace by default
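
The gcp-auth messages earlier in this log mention that a pod can opt out of the automatic credential mount by carrying a label with the `gcp-auth-skip-secret` key, and that already-running pods only pick the mount up after being recreated or after rerunning `addons enable` with `--refresh` (for this addon, presumably `minikube addons enable gcp-auth --refresh`). A minimal sketch of such an opted-out pod, assuming the conventional "true" label value (the value itself is not stated in the log) and using client-go types purely for illustration:

// Sketch only: a pod that opts out of the gcp-auth credential mount via the
// gcp-auth-skip-secret label. The label key comes from the log output above;
// the "true" value, the pod name, and the client-go construction are assumptions.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-creds",
			Namespace: "default",
			// Tells the gcp-auth webhook not to inject GCP credentials into this pod.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "gcr.io/k8s-minikube/busybox"},
			},
		},
	}
	fmt.Printf("pod %s labels: %v\n", pod.Name, pod.Labels)
}

Pods created without this label in the addons-679190 cluster get the credentials mounted automatically, as the log states above.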
	
	
	==> CRI-O <==
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.836213760Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854588836182124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8778276-90a8-41bb-bb7f-b445602e1b9d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.836839578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f942b605-a3b1-4f24-8052-e430a9926513 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.836950538Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f942b605-a3b1-4f24-8052-e430a9926513 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.837335669Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a7564befb2262f7ccc92bee22dd03bfb963dd2b593f58cd61501d7aaa6eeb97,PodSandboxId:f9cf5175fff467a5d0e91ee68f7d277bf862a45e11cb39fffb6ee3614ead9923,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726854581495856579,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-xfq9d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e1ea699-e231-467a-a0d1-75143d1036b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19b33ea2ca1e95f3cf6352959a33b4097f8b4afb2a99285d44c77b250f277153,PodSandboxId:baa7b5cea9fa13bed223540120ceec73698806159aa49c33bd266d18b3ec5d0b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726854442139284452,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 719ce5c1-7853-4fc9-8fd3-7725aba7ed0c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0b4062645fc2c1c8c82cdce410360489fe6600fb411ea6d60c712a5c12813f,PodSandboxId:5339d36289fab846d220fa9edfa1af3bbc0ffda6cd68845caeceb1aa176d74b5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726853904701573245,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-58447,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 6925f58e-54c8-43f8-893e-4ff8a6a84707,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3825f7af126e587a79dfe6d3c64f647f4ccb761c8df843dbb144c54906de5bed,PodSandboxId:18386a0016b7ddae0aedd789d358584cdada2d1f8b42771e39fcc4d6cdf1aacf,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726853883039393792,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-44nll,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3ec98204-6d97-4ac3-a7a9-d53c47f3ab50,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23688c0ebf707d13467884ca61445010331f6a8ec609486dddc1e941625565b6,PodSandboxId:697ab83fd20d470579c5d40bd8e40020a2bc863a8ec4285b53adf170408a43d0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726853882399572406,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-85mv4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a62c0948-1d28-462f-9fa5-104e567a74d7,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f37f4284c136c5a93b735b37ed0979ddbd084e8586efb69fd840601eea6e9b2d,PodSandboxId:0e75080753c376351238a32172857b9850af59e0803ed5f26aa7a13beb06c7fe,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726853848562042456,Labels:map[string]string{io.kubernetes.co
ntainer.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-vcxc2,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 7fc94b7c-a858-4af7-9355-2a81abf00a96,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33bb418967f323981293619c300cd658dfb1efb2617606f6872ec9443138d8d4,PodSandboxId:d86e657235d1aee688b3d4777827dc899fbf0d085c23dbc7f847861897fb0987,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,Creat
edAt:1726853845803445826,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-fj4mf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb63308-9d43-444e-b31b-a5efeef5d323,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c860a4c507c477109d4747b1b074c68a4a31b9aeee2cfc9591edda6f92a49c41,PodSandboxId:5c51ff7a22efee9bfed73a4683dfae61105461d71e774cd60d35b169d58701f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6
e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726853822536970769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339440d6-4355-4e26-a436-2edefb4d7b9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56bab5bc8aac1da4daf358eadce72c458a49fba19d4c18106120004ede4b716,PodSandboxId:6cb2d547f55f043965d896f833e8e278cd9ea490c81edd68102d7c2c5eb333bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e
956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726853820362351318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dsxdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371b6ad-8f6e-4474-a677-f07c0b4e0a38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92cf3212ca3856ac30692de35be4bf7391dbf53d3b71d366bbd05e33353b54b5,PodSandboxId:28727162eae183907c195d9fd5223acf57032a1040947cea5dfa9b15cfe6dd47,Metadata:&ContainerMetadata{Name:kube-proxy,Attemp
t:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726853817570982707,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klvxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6edcd5de-35eb-4e5b-8073-e2a49428b300,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f013b45bfd968d0cf23514647a630ce699e0ed7b9a36138115cce03563ebd0ef,PodSandboxId:fa1a7012d3ccb9a78a9cc3d8d35ee4a4aa883415cdd8fe1eda6bd57d5483df19,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913f
c06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726853806327216995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a8f619038c0e1f5f5e421f1961f8a4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48780df679f85764217ee650d3268dfc7988e43dd065577ee1d4a41b3b94f2c,PodSandboxId:fd2c93f89c728325fe986e081ce5e22caf7056060693eac9660634d886e81823,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954
ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726853806316679007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc8d6b591a917dcaa84c49b09e7c78a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b6b3339abef5b932a0a51290bacbf5ea276d9c2f651978f0fb128032a963ff0,PodSandboxId:fd6f30369eff5dcb42ee83f7ad74d1d0fa801c373f8920a44d3e06edba2e06d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135
b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726853806319373192,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97255a7c0e075db6f5e083c1ea277628,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad81a4e34c0219e34fccc748356a548ec1f96c3939761411d1393005f3368bd,PodSandboxId:7b642a7bdd5a2c7525004ed61914d2fc58ca1f4c403f001bb0d898ef73e618c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1a
aea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726853806168616121,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548b4fadf5e5a756eea840e162d03eb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f942b605-a3b1-4f24-8052-e430a9926513 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.873970745Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=93fe6ceb-cc83-4a1e-b92e-e86df5b4edae name=/runtime.v1.RuntimeService/Version
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.874065690Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=93fe6ceb-cc83-4a1e-b92e-e86df5b4edae name=/runtime.v1.RuntimeService/Version
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.875199115Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b78c9084-7d14-4e5c-9fb1-9941b3b21098 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.876375728Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854588876342256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b78c9084-7d14-4e5c-9fb1-9941b3b21098 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.877099595Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d53a22fb-f776-46e7-87eb-53930fdf4dbe name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.877153614Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d53a22fb-f776-46e7-87eb-53930fdf4dbe name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.882482072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a7564befb2262f7ccc92bee22dd03bfb963dd2b593f58cd61501d7aaa6eeb97,PodSandboxId:f9cf5175fff467a5d0e91ee68f7d277bf862a45e11cb39fffb6ee3614ead9923,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726854581495856579,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-xfq9d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e1ea699-e231-467a-a0d1-75143d1036b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19b33ea2ca1e95f3cf6352959a33b4097f8b4afb2a99285d44c77b250f277153,PodSandboxId:baa7b5cea9fa13bed223540120ceec73698806159aa49c33bd266d18b3ec5d0b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726854442139284452,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 719ce5c1-7853-4fc9-8fd3-7725aba7ed0c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0b4062645fc2c1c8c82cdce410360489fe6600fb411ea6d60c712a5c12813f,PodSandboxId:5339d36289fab846d220fa9edfa1af3bbc0ffda6cd68845caeceb1aa176d74b5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726853904701573245,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-58447,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 6925f58e-54c8-43f8-893e-4ff8a6a84707,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3825f7af126e587a79dfe6d3c64f647f4ccb761c8df843dbb144c54906de5bed,PodSandboxId:18386a0016b7ddae0aedd789d358584cdada2d1f8b42771e39fcc4d6cdf1aacf,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726853883039393792,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-44nll,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3ec98204-6d97-4ac3-a7a9-d53c47f3ab50,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23688c0ebf707d13467884ca61445010331f6a8ec609486dddc1e941625565b6,PodSandboxId:697ab83fd20d470579c5d40bd8e40020a2bc863a8ec4285b53adf170408a43d0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726853882399572406,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-85mv4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a62c0948-1d28-462f-9fa5-104e567a74d7,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f37f4284c136c5a93b735b37ed0979ddbd084e8586efb69fd840601eea6e9b2d,PodSandboxId:0e75080753c376351238a32172857b9850af59e0803ed5f26aa7a13beb06c7fe,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726853848562042456,Labels:map[string]string{io.kubernetes.co
ntainer.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-vcxc2,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 7fc94b7c-a858-4af7-9355-2a81abf00a96,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33bb418967f323981293619c300cd658dfb1efb2617606f6872ec9443138d8d4,PodSandboxId:d86e657235d1aee688b3d4777827dc899fbf0d085c23dbc7f847861897fb0987,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,Creat
edAt:1726853845803445826,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-fj4mf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb63308-9d43-444e-b31b-a5efeef5d323,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c860a4c507c477109d4747b1b074c68a4a31b9aeee2cfc9591edda6f92a49c41,PodSandboxId:5c51ff7a22efee9bfed73a4683dfae61105461d71e774cd60d35b169d58701f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6
e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726853822536970769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339440d6-4355-4e26-a436-2edefb4d7b9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56bab5bc8aac1da4daf358eadce72c458a49fba19d4c18106120004ede4b716,PodSandboxId:6cb2d547f55f043965d896f833e8e278cd9ea490c81edd68102d7c2c5eb333bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e
956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726853820362351318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dsxdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371b6ad-8f6e-4474-a677-f07c0b4e0a38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92cf3212ca3856ac30692de35be4bf7391dbf53d3b71d366bbd05e33353b54b5,PodSandboxId:28727162eae183907c195d9fd5223acf57032a1040947cea5dfa9b15cfe6dd47,Metadata:&ContainerMetadata{Name:kube-proxy,Attemp
t:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726853817570982707,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klvxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6edcd5de-35eb-4e5b-8073-e2a49428b300,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f013b45bfd968d0cf23514647a630ce699e0ed7b9a36138115cce03563ebd0ef,PodSandboxId:fa1a7012d3ccb9a78a9cc3d8d35ee4a4aa883415cdd8fe1eda6bd57d5483df19,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913f
c06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726853806327216995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a8f619038c0e1f5f5e421f1961f8a4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48780df679f85764217ee650d3268dfc7988e43dd065577ee1d4a41b3b94f2c,PodSandboxId:fd2c93f89c728325fe986e081ce5e22caf7056060693eac9660634d886e81823,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954
ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726853806316679007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc8d6b591a917dcaa84c49b09e7c78a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b6b3339abef5b932a0a51290bacbf5ea276d9c2f651978f0fb128032a963ff0,PodSandboxId:fd6f30369eff5dcb42ee83f7ad74d1d0fa801c373f8920a44d3e06edba2e06d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135
b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726853806319373192,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97255a7c0e075db6f5e083c1ea277628,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad81a4e34c0219e34fccc748356a548ec1f96c3939761411d1393005f3368bd,PodSandboxId:7b642a7bdd5a2c7525004ed61914d2fc58ca1f4c403f001bb0d898ef73e618c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1a
aea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726853806168616121,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548b4fadf5e5a756eea840e162d03eb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d53a22fb-f776-46e7-87eb-53930fdf4dbe name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.917424818Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a09db68b-b313-47e4-b1c2-b0d97a30e07b name=/runtime.v1.RuntimeService/Version
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.917536119Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a09db68b-b313-47e4-b1c2-b0d97a30e07b name=/runtime.v1.RuntimeService/Version
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.918833167Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bf6da2c7-62cc-4543-8646-1169c82bfbae name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.920210974Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854588920179330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf6da2c7-62cc-4543-8646-1169c82bfbae name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.920766257Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1199c45a-ae23-40ae-9f37-0e2804adcac5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.920825402Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1199c45a-ae23-40ae-9f37-0e2804adcac5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.921193765Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a7564befb2262f7ccc92bee22dd03bfb963dd2b593f58cd61501d7aaa6eeb97,PodSandboxId:f9cf5175fff467a5d0e91ee68f7d277bf862a45e11cb39fffb6ee3614ead9923,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726854581495856579,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-xfq9d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e1ea699-e231-467a-a0d1-75143d1036b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19b33ea2ca1e95f3cf6352959a33b4097f8b4afb2a99285d44c77b250f277153,PodSandboxId:baa7b5cea9fa13bed223540120ceec73698806159aa49c33bd266d18b3ec5d0b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726854442139284452,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 719ce5c1-7853-4fc9-8fd3-7725aba7ed0c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0b4062645fc2c1c8c82cdce410360489fe6600fb411ea6d60c712a5c12813f,PodSandboxId:5339d36289fab846d220fa9edfa1af3bbc0ffda6cd68845caeceb1aa176d74b5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726853904701573245,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-58447,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 6925f58e-54c8-43f8-893e-4ff8a6a84707,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3825f7af126e587a79dfe6d3c64f647f4ccb761c8df843dbb144c54906de5bed,PodSandboxId:18386a0016b7ddae0aedd789d358584cdada2d1f8b42771e39fcc4d6cdf1aacf,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726853883039393792,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-44nll,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3ec98204-6d97-4ac3-a7a9-d53c47f3ab50,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23688c0ebf707d13467884ca61445010331f6a8ec609486dddc1e941625565b6,PodSandboxId:697ab83fd20d470579c5d40bd8e40020a2bc863a8ec4285b53adf170408a43d0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726853882399572406,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-85mv4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a62c0948-1d28-462f-9fa5-104e567a74d7,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f37f4284c136c5a93b735b37ed0979ddbd084e8586efb69fd840601eea6e9b2d,PodSandboxId:0e75080753c376351238a32172857b9850af59e0803ed5f26aa7a13beb06c7fe,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726853848562042456,Labels:map[string]string{io.kubernetes.co
ntainer.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-vcxc2,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 7fc94b7c-a858-4af7-9355-2a81abf00a96,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33bb418967f323981293619c300cd658dfb1efb2617606f6872ec9443138d8d4,PodSandboxId:d86e657235d1aee688b3d4777827dc899fbf0d085c23dbc7f847861897fb0987,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,Creat
edAt:1726853845803445826,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-fj4mf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb63308-9d43-444e-b31b-a5efeef5d323,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c860a4c507c477109d4747b1b074c68a4a31b9aeee2cfc9591edda6f92a49c41,PodSandboxId:5c51ff7a22efee9bfed73a4683dfae61105461d71e774cd60d35b169d58701f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6
e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726853822536970769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339440d6-4355-4e26-a436-2edefb4d7b9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56bab5bc8aac1da4daf358eadce72c458a49fba19d4c18106120004ede4b716,PodSandboxId:6cb2d547f55f043965d896f833e8e278cd9ea490c81edd68102d7c2c5eb333bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e
956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726853820362351318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dsxdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371b6ad-8f6e-4474-a677-f07c0b4e0a38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92cf3212ca3856ac30692de35be4bf7391dbf53d3b71d366bbd05e33353b54b5,PodSandboxId:28727162eae183907c195d9fd5223acf57032a1040947cea5dfa9b15cfe6dd47,Metadata:&ContainerMetadata{Name:kube-proxy,Attemp
t:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726853817570982707,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klvxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6edcd5de-35eb-4e5b-8073-e2a49428b300,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f013b45bfd968d0cf23514647a630ce699e0ed7b9a36138115cce03563ebd0ef,PodSandboxId:fa1a7012d3ccb9a78a9cc3d8d35ee4a4aa883415cdd8fe1eda6bd57d5483df19,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913f
c06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726853806327216995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a8f619038c0e1f5f5e421f1961f8a4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48780df679f85764217ee650d3268dfc7988e43dd065577ee1d4a41b3b94f2c,PodSandboxId:fd2c93f89c728325fe986e081ce5e22caf7056060693eac9660634d886e81823,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954
ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726853806316679007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc8d6b591a917dcaa84c49b09e7c78a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b6b3339abef5b932a0a51290bacbf5ea276d9c2f651978f0fb128032a963ff0,PodSandboxId:fd6f30369eff5dcb42ee83f7ad74d1d0fa801c373f8920a44d3e06edba2e06d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135
b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726853806319373192,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97255a7c0e075db6f5e083c1ea277628,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad81a4e34c0219e34fccc748356a548ec1f96c3939761411d1393005f3368bd,PodSandboxId:7b642a7bdd5a2c7525004ed61914d2fc58ca1f4c403f001bb0d898ef73e618c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1a
aea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726853806168616121,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548b4fadf5e5a756eea840e162d03eb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1199c45a-ae23-40ae-9f37-0e2804adcac5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.959465099Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=719da491-423e-4e9d-87dd-0c5516b69e51 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.959543095Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=719da491-423e-4e9d-87dd-0c5516b69e51 name=/runtime.v1.RuntimeService/Version
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.960519945Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=41efdaa6-973f-4751-946d-8c432402905f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.961951743Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854588961880217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41efdaa6-973f-4751-946d-8c432402905f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.962422095Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9128f9b7-c415-406c-acbb-ff383ca6ed7f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.962476348Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9128f9b7-c415-406c-acbb-ff383ca6ed7f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:49:48 addons-679190 crio[661]: time="2024-09-20 17:49:48.962785848Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a7564befb2262f7ccc92bee22dd03bfb963dd2b593f58cd61501d7aaa6eeb97,PodSandboxId:f9cf5175fff467a5d0e91ee68f7d277bf862a45e11cb39fffb6ee3614ead9923,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726854581495856579,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-xfq9d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e1ea699-e231-467a-a0d1-75143d1036b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19b33ea2ca1e95f3cf6352959a33b4097f8b4afb2a99285d44c77b250f277153,PodSandboxId:baa7b5cea9fa13bed223540120ceec73698806159aa49c33bd266d18b3ec5d0b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726854442139284452,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 719ce5c1-7853-4fc9-8fd3-7725aba7ed0c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0b4062645fc2c1c8c82cdce410360489fe6600fb411ea6d60c712a5c12813f,PodSandboxId:5339d36289fab846d220fa9edfa1af3bbc0ffda6cd68845caeceb1aa176d74b5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726853904701573245,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-58447,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 6925f58e-54c8-43f8-893e-4ff8a6a84707,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3825f7af126e587a79dfe6d3c64f647f4ccb761c8df843dbb144c54906de5bed,PodSandboxId:18386a0016b7ddae0aedd789d358584cdada2d1f8b42771e39fcc4d6cdf1aacf,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726853883039393792,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-44nll,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3ec98204-6d97-4ac3-a7a9-d53c47f3ab50,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23688c0ebf707d13467884ca61445010331f6a8ec609486dddc1e941625565b6,PodSandboxId:697ab83fd20d470579c5d40bd8e40020a2bc863a8ec4285b53adf170408a43d0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726853882399572406,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-85mv4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a62c0948-1d28-462f-9fa5-104e567a74d7,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f37f4284c136c5a93b735b37ed0979ddbd084e8586efb69fd840601eea6e9b2d,PodSandboxId:0e75080753c376351238a32172857b9850af59e0803ed5f26aa7a13beb06c7fe,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726853848562042456,Labels:map[string]string{io.kubernetes.co
ntainer.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-vcxc2,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 7fc94b7c-a858-4af7-9355-2a81abf00a96,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33bb418967f323981293619c300cd658dfb1efb2617606f6872ec9443138d8d4,PodSandboxId:d86e657235d1aee688b3d4777827dc899fbf0d085c23dbc7f847861897fb0987,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,Creat
edAt:1726853845803445826,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-fj4mf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb63308-9d43-444e-b31b-a5efeef5d323,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c860a4c507c477109d4747b1b074c68a4a31b9aeee2cfc9591edda6f92a49c41,PodSandboxId:5c51ff7a22efee9bfed73a4683dfae61105461d71e774cd60d35b169d58701f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6
e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726853822536970769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339440d6-4355-4e26-a436-2edefb4d7b9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56bab5bc8aac1da4daf358eadce72c458a49fba19d4c18106120004ede4b716,PodSandboxId:6cb2d547f55f043965d896f833e8e278cd9ea490c81edd68102d7c2c5eb333bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e
956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726853820362351318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dsxdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371b6ad-8f6e-4474-a677-f07c0b4e0a38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92cf3212ca3856ac30692de35be4bf7391dbf53d3b71d366bbd05e33353b54b5,PodSandboxId:28727162eae183907c195d9fd5223acf57032a1040947cea5dfa9b15cfe6dd47,Metadata:&ContainerMetadata{Name:kube-proxy,Attemp
t:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726853817570982707,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klvxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6edcd5de-35eb-4e5b-8073-e2a49428b300,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f013b45bfd968d0cf23514647a630ce699e0ed7b9a36138115cce03563ebd0ef,PodSandboxId:fa1a7012d3ccb9a78a9cc3d8d35ee4a4aa883415cdd8fe1eda6bd57d5483df19,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913f
c06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726853806327216995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a8f619038c0e1f5f5e421f1961f8a4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48780df679f85764217ee650d3268dfc7988e43dd065577ee1d4a41b3b94f2c,PodSandboxId:fd2c93f89c728325fe986e081ce5e22caf7056060693eac9660634d886e81823,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954
ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726853806316679007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc8d6b591a917dcaa84c49b09e7c78a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b6b3339abef5b932a0a51290bacbf5ea276d9c2f651978f0fb128032a963ff0,PodSandboxId:fd6f30369eff5dcb42ee83f7ad74d1d0fa801c373f8920a44d3e06edba2e06d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135
b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726853806319373192,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97255a7c0e075db6f5e083c1ea277628,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad81a4e34c0219e34fccc748356a548ec1f96c3939761411d1393005f3368bd,PodSandboxId:7b642a7bdd5a2c7525004ed61914d2fc58ca1f4c403f001bb0d898ef73e618c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1a
aea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726853806168616121,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548b4fadf5e5a756eea840e162d03eb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9128f9b7-c415-406c-acbb-ff383ca6ed7f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0a7564befb226       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   f9cf5175fff46       hello-world-app-55bf9c44b4-xfq9d
	19b33ea2ca1e9       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   baa7b5cea9fa1       nginx
	ec0b4062645fc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 11 minutes ago      Running             gcp-auth                  0                   5339d36289fab       gcp-auth-89d5ffd79-58447
	3825f7af126e5       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             11 minutes ago      Exited              patch                     1                   18386a0016b7d       ingress-nginx-admission-patch-44nll
	23688c0ebf707       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago      Exited              create                    0                   697ab83fd20d4       ingress-nginx-admission-create-85mv4
	f37f4284c136c       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             12 minutes ago      Running             local-path-provisioner    0                   0e75080753c37       local-path-provisioner-86d989889c-vcxc2
	33bb418967f32       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        12 minutes ago      Running             metrics-server            0                   d86e657235d1a       metrics-server-84c5f94fbc-fj4mf
	c860a4c507c47       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             12 minutes ago      Running             storage-provisioner       0                   5c51ff7a22efe       storage-provisioner
	e56bab5bc8aac       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             12 minutes ago      Running             coredns                   0                   6cb2d547f55f0       coredns-7c65d6cfc9-dsxdk
	92cf3212ca385       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             12 minutes ago      Running             kube-proxy                0                   28727162eae18       kube-proxy-klvxz
	f013b45bfd968       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             13 minutes ago      Running             etcd                      0                   fa1a7012d3ccb       etcd-addons-679190
	6b6b3339abef5       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             13 minutes ago      Running             kube-controller-manager   0                   fd6f30369eff5       kube-controller-manager-addons-679190
	b48780df679f8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             13 minutes ago      Running             kube-scheduler            0                   fd2c93f89c728       kube-scheduler-addons-679190
	1ad81a4e34c02       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             13 minutes ago      Running             kube-apiserver            0                   7b642a7bdd5a2       kube-apiserver-addons-679190
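The table above is CRI-O's own inventory of containers on the node, the same data returned by the ListContainers calls logged earlier. As an illustrative way to regenerate it outside of this captured run (assuming SSH access to the minikube VM; not part of the test output), crictl can query the same unix:///var/run/crio/crio.sock endpoint:

    minikube -p addons-679190 ssh -- sudo crictl ps -a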
	
	
	==> coredns [e56bab5bc8aac1da4daf358eadce72c458a49fba19d4c18106120004ede4b716] <==
	[INFO] 10.244.0.6:43873 - 60318 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000102028s
	[INFO] 10.244.0.6:49634 - 1642 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000165365s
	[INFO] 10.244.0.6:49634 - 14697 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000162336s
	[INFO] 10.244.0.6:51034 - 55160 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094365s
	[INFO] 10.244.0.6:51034 - 10106 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000065072s
	[INFO] 10.244.0.6:60315 - 40487 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043683s
	[INFO] 10.244.0.6:60315 - 43321 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084501s
	[INFO] 10.244.0.6:35891 - 53873 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000042787s
	[INFO] 10.244.0.6:35891 - 34419 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000033205s
	[INFO] 10.244.0.6:33447 - 25628 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000150214s
	[INFO] 10.244.0.6:33447 - 35042 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000039143s
	[INFO] 10.244.0.6:40512 - 59984 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000083424s
	[INFO] 10.244.0.6:40512 - 63570 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038001s
	[INFO] 10.244.0.6:45833 - 63289 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052986s
	[INFO] 10.244.0.6:45833 - 1087 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027214s
	[INFO] 10.244.0.6:44945 - 60461 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000039073s
	[INFO] 10.244.0.6:44945 - 33323 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000036767s
	[INFO] 10.244.0.21:44980 - 55236 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000537238s
	[INFO] 10.244.0.21:57739 - 29484 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00030418s
	[INFO] 10.244.0.21:36936 - 49312 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000192738s
	[INFO] 10.244.0.21:55426 - 11322 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000253353s
	[INFO] 10.244.0.21:57850 - 37730 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000183379s
	[INFO] 10.244.0.21:53881 - 17609 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00016102s
	[INFO] 10.244.0.21:43516 - 53661 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001411255s
	[INFO] 10.244.0.21:49825 - 60779 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000963325s
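The paired NXDOMAIN/NOERROR lines above are consistent with the pod resolver appending its search domains while looking up the in-cluster registry Service; only the fully qualified registry.kube-system.svc.cluster.local query answers NOERROR. As a hand-run sketch of the same lookup (the pod name dns-check is hypothetical and not part of this run), the busybox image used by the test can drive nslookup directly:

    kubectl --context addons-679190 run --rm dns-check --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      nslookup registry.kube-system.svc.cluster.local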
	
	
	==> describe nodes <==
	Name:               addons-679190
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-679190
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=addons-679190
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T17_36_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-679190
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:36:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-679190
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:49:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:47:23 +0000   Fri, 20 Sep 2024 17:36:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:47:23 +0000   Fri, 20 Sep 2024 17:36:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:47:23 +0000   Fri, 20 Sep 2024 17:36:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:47:23 +0000   Fri, 20 Sep 2024 17:36:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.158
	  Hostname:    addons-679190
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 be44d0789ac247d0942761612c630a1f
	  System UUID:                be44d078-9ac2-47d0-9427-61612c630a1f
	  Boot ID:                    b2360fab-23fa-467c-99ca-2729b31c70c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-xfq9d           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-89d5ffd79-58447                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-dsxdk                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-679190                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-679190               250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-679190      200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-klvxz                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-679190               100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-fj4mf            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-vcxc2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node addons-679190 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node addons-679190 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node addons-679190 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m   kubelet          Node addons-679190 status is now: NodeReady
	  Normal  RegisteredNode           12m   node-controller  Node addons-679190 event: Registered Node addons-679190 in Controller
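For reference, the 850m of CPU under Allocated resources is simply the sum of the per-pod requests listed above (100m coredns + 100m etcd + 250m kube-apiserver + 200m kube-controller-manager + 100m kube-scheduler + 100m metrics-server). Assuming the cluster is still reachable, the same node summary could be regenerated with:

    kubectl --context addons-679190 describe node addons-679190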
	
	
	==> dmesg <==
	[ +15.750642] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.216234] kauditd_printk_skb: 15 callbacks suppressed
	[  +9.065331] kauditd_printk_skb: 9 callbacks suppressed
	[  +8.958265] kauditd_printk_skb: 4 callbacks suppressed
	[Sep20 17:38] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.888314] kauditd_printk_skb: 42 callbacks suppressed
	[ +10.053683] kauditd_printk_skb: 77 callbacks suppressed
	[  +5.312437] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.622571] kauditd_printk_skb: 54 callbacks suppressed
	[ +26.074732] kauditd_printk_skb: 13 callbacks suppressed
	[Sep20 17:39] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 17:41] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 17:43] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 17:46] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.504877] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.087829] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.196655] kauditd_printk_skb: 59 callbacks suppressed
	[  +7.642956] kauditd_printk_skb: 1 callbacks suppressed
	[Sep20 17:47] kauditd_printk_skb: 14 callbacks suppressed
	[ +12.859221] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.845694] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.862171] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.689570] kauditd_printk_skb: 33 callbacks suppressed
	[Sep20 17:49] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.194903] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [f013b45bfd968d0cf23514647a630ce699e0ed7b9a36138115cce03563ebd0ef] <==
	{"level":"warn","ts":"2024-09-20T17:38:10.652277Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"446.317258ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:38:10.652308Z","caller":"traceutil/trace.go:171","msg":"trace[1537739953] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1040; }","duration":"446.348193ms","start":"2024-09-20T17:38:10.205954Z","end":"2024-09-20T17:38:10.652302Z","steps":["trace[1537739953] 'agreement among raft nodes before linearized reading'  (duration: 446.305109ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:38:10.652325Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T17:38:10.205878Z","time spent":"446.442946ms","remote":"127.0.0.1:49978","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-20T17:38:10.652384Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.30745ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-09-20T17:38:10.652414Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"327.397124ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:38:10.652445Z","caller":"traceutil/trace.go:171","msg":"trace[1075324407] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1040; }","duration":"327.426462ms","start":"2024-09-20T17:38:10.325013Z","end":"2024-09-20T17:38:10.652439Z","steps":["trace[1075324407] 'agreement among raft nodes before linearized reading'  (duration: 327.387735ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:38:10.652430Z","caller":"traceutil/trace.go:171","msg":"trace[133845077] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1040; }","duration":"184.35459ms","start":"2024-09-20T17:38:10.468068Z","end":"2024-09-20T17:38:10.652423Z","steps":["trace[133845077] 'agreement among raft nodes before linearized reading'  (duration: 184.292099ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:38:10.653015Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.844009ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:38:10.653104Z","caller":"traceutil/trace.go:171","msg":"trace[209721001] range","detail":"{range_begin:/registry/persistentvolumeclaims/; range_end:/registry/persistentvolumeclaims0; response_count:0; response_revision:1040; }","duration":"175.93602ms","start":"2024-09-20T17:38:10.477160Z","end":"2024-09-20T17:38:10.653096Z","steps":["trace[209721001] 'agreement among raft nodes before linearized reading'  (duration: 175.83519ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:38:10.653637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"380.604231ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"info","ts":"2024-09-20T17:38:10.653728Z","caller":"traceutil/trace.go:171","msg":"trace[747134196] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1040; }","duration":"380.696537ms","start":"2024-09-20T17:38:10.273023Z","end":"2024-09-20T17:38:10.653720Z","steps":["trace[747134196] 'agreement among raft nodes before linearized reading'  (duration: 380.53387ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:38:10.653804Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T17:38:10.272989Z","time spent":"380.80517ms","remote":"127.0.0.1:49862","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":170,"response size":31,"request content":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true "}
	{"level":"info","ts":"2024-09-20T17:39:02.358224Z","caller":"traceutil/trace.go:171","msg":"trace[1180894535] transaction","detail":"{read_only:false; response_revision:1253; number_of_response:1; }","duration":"124.748523ms","start":"2024-09-20T17:39:02.233450Z","end":"2024-09-20T17:39:02.358199Z","steps":["trace[1180894535] 'process raft request'  (duration: 124.628143ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:46:47.805022Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1506}
	{"level":"info","ts":"2024-09-20T17:46:47.839955Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1506,"took":"34.098256ms","hash":2552434771,"current-db-size-bytes":7012352,"current-db-size":"7.0 MB","current-db-size-in-use-bytes":3891200,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2024-09-20T17:46:47.840019Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2552434771,"revision":1506,"compact-revision":-1}
	{"level":"warn","ts":"2024-09-20T17:46:50.466442Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.994871ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:46:50.466495Z","caller":"traceutil/trace.go:171","msg":"trace[1650142898] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2057; }","duration":"139.081796ms","start":"2024-09-20T17:46:50.327400Z","end":"2024-09-20T17:46:50.466482Z","steps":["trace[1650142898] 'range keys from in-memory index tree'  (duration: 138.980039ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:46:50.466583Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.518533ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:46:50.466595Z","caller":"traceutil/trace.go:171","msg":"trace[2129239253] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2057; }","duration":"128.546109ms","start":"2024-09-20T17:46:50.338045Z","end":"2024-09-20T17:46:50.466591Z","steps":["trace[2129239253] 'range keys from in-memory index tree'  (duration: 128.454845ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:46:52.057090Z","caller":"traceutil/trace.go:171","msg":"trace[2010592287] transaction","detail":"{read_only:false; response_revision:2066; number_of_response:1; }","duration":"142.531828ms","start":"2024-09-20T17:46:51.914533Z","end":"2024-09-20T17:46:52.057065Z","steps":["trace[2010592287] 'process raft request'  (duration: 142.444566ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:46:52.274053Z","caller":"traceutil/trace.go:171","msg":"trace[1207827034] transaction","detail":"{read_only:false; response_revision:2067; number_of_response:1; }","duration":"307.822812ms","start":"2024-09-20T17:46:51.966218Z","end":"2024-09-20T17:46:52.274041Z","steps":["trace[1207827034] 'process raft request'  (duration: 307.159868ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:46:52.276791Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T17:46:51.966199Z","time spent":"310.498319ms","remote":"127.0.0.1:50070","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-679190\" mod_revision:1964 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-679190\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-679190\" > >"}
	{"level":"info","ts":"2024-09-20T17:47:05.086002Z","caller":"traceutil/trace.go:171","msg":"trace[426642049] transaction","detail":"{read_only:false; response_revision:2125; number_of_response:1; }","duration":"360.035966ms","start":"2024-09-20T17:47:04.725950Z","end":"2024-09-20T17:47:05.085986Z","steps":["trace[426642049] 'process raft request'  (duration: 359.86959ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:47:05.086166Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T17:47:04.725893Z","time spent":"360.202664ms","remote":"127.0.0.1:49862","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":764,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-zvp8w.17f704774ef9a14d\" mod_revision:1538 > success:<request_put:<key:\"/registry/events/gadget/gadget-zvp8w.17f704774ef9a14d\" value_size:693 lease:8396277679900029684 >> failure:<request_range:<key:\"/registry/events/gadget/gadget-zvp8w.17f704774ef9a14d\" > >"}
	
	
	==> gcp-auth [ec0b4062645fc2c1c8c82cdce410360489fe6600fb411ea6d60c712a5c12813f] <==
	2024/09/20 17:38:30 Ready to write response ...
	2024/09/20 17:38:30 Ready to marshal response ...
	2024/09/20 17:38:30 Ready to write response ...
	2024/09/20 17:46:33 Ready to marshal response ...
	2024/09/20 17:46:33 Ready to write response ...
	2024/09/20 17:46:33 Ready to marshal response ...
	2024/09/20 17:46:33 Ready to write response ...
	2024/09/20 17:46:44 Ready to marshal response ...
	2024/09/20 17:46:44 Ready to write response ...
	2024/09/20 17:46:46 Ready to marshal response ...
	2024/09/20 17:46:46 Ready to write response ...
	2024/09/20 17:46:46 Ready to marshal response ...
	2024/09/20 17:46:46 Ready to write response ...
	2024/09/20 17:46:46 Ready to marshal response ...
	2024/09/20 17:46:46 Ready to write response ...
	2024/09/20 17:46:46 Ready to marshal response ...
	2024/09/20 17:46:46 Ready to write response ...
	2024/09/20 17:46:58 Ready to marshal response ...
	2024/09/20 17:46:58 Ready to write response ...
	2024/09/20 17:47:17 Ready to marshal response ...
	2024/09/20 17:47:17 Ready to write response ...
	2024/09/20 17:47:30 Ready to marshal response ...
	2024/09/20 17:47:30 Ready to write response ...
	2024/09/20 17:49:38 Ready to marshal response ...
	2024/09/20 17:49:38 Ready to write response ...
	
	
	==> kernel <==
	 17:49:49 up 13 min,  0 users,  load average: 0.64, 0.40, 0.34
	Linux addons-679190 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1ad81a4e34c0219e34fccc748356a548ec1f96c3939761411d1393005f3368bd] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0920 17:38:30.790878       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.117.29:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.117.29:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.117.29:443: connect: connection refused" logger="UnhandledError"
	I0920 17:38:30.830638       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0920 17:46:46.684183       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.95.13"}
	I0920 17:47:12.280076       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0920 17:47:12.410157       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	W0920 17:47:13.322883       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 17:47:17.844287       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 17:47:18.049415       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.232.246"}
	I0920 17:47:46.449796       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 17:47:46.449837       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 17:47:46.476080       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 17:47:46.476629       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 17:47:46.493047       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 17:47:46.493097       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 17:47:46.524567       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 17:47:46.524602       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 17:47:46.579848       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 17:47:46.579981       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0920 17:47:47.491165       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0920 17:47:47.581041       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0920 17:47:47.634102       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0920 17:49:38.828888       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.91.183"}
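The 503 and connection-refused errors against v1beta1.metrics.k8s.io above show the API aggregation layer failing to reach the metrics-server backend. As an illustrative follow-up (not captured in this run, and assuming the add-on's usual metrics-server Deployment name in kube-system), the APIService condition and the backend's own logs could be inspected with:

    kubectl --context addons-679190 get apiservice v1beta1.metrics.k8s.io
    kubectl --context addons-679190 -n kube-system logs deploy/metrics-server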
	
	
	==> kube-controller-manager [6b6b3339abef5b932a0a51290bacbf5ea276d9c2f651978f0fb128032a963ff0] <==
	W0920 17:48:24.879732       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:48:24.879849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:48:28.850536       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:48:28.850651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:48:29.438597       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:48:29.438651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:49:00.403878       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:49:00.404071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:49:03.967813       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:49:03.968016       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:49:12.368828       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:49:12.368885       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:49:14.158099       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:49:14.158196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 17:49:38.637605       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="46.460746ms"
	I0920 17:49:38.647447       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="8.77486ms"
	I0920 17:49:38.648104       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="79.995µs"
	I0920 17:49:38.663303       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="45.775µs"
	I0920 17:49:40.956161       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0920 17:49:40.964809       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="7.856µs"
	I0920 17:49:40.966243       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0920 17:49:42.335254       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="14.748734ms"
	I0920 17:49:42.336520       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="55.883µs"
	W0920 17:49:44.902349       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:49:44.902499       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [92cf3212ca3856ac30692de35be4bf7391dbf53d3b71d366bbd05e33353b54b5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 17:36:58.351748       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 17:36:58.367671       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.158"]
	E0920 17:36:58.367735       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:36:58.429527       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 17:36:58.429561       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 17:36:58.429586       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:36:58.435086       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:36:58.435354       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:36:58.435365       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:36:58.438303       1 config.go:199] "Starting service config controller"
	I0920 17:36:58.438316       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:36:58.438348       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:36:58.438352       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:36:58.441613       1 config.go:328] "Starting node config controller"
	I0920 17:36:58.441622       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:36:58.539182       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 17:36:58.539245       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:36:58.541947       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b48780df679f85764217ee650d3268dfc7988e43dd065577ee1d4a41b3b94f2c] <==
	W0920 17:36:49.950209       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:36:49.950280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:49.987691       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 17:36:49.987738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.025498       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 17:36:50.025544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.093076       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 17:36:50.093121       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.124089       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 17:36:50.124138       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.155077       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 17:36:50.155234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.155746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 17:36:50.155937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.223176       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 17:36:50.223220       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.247365       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 17:36:50.247426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.317547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 17:36:50.317600       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.383564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 17:36:50.383615       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.534598       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 17:36:50.534719       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 17:36:53.562785       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 17:49:38 addons-679190 kubelet[1208]: I0920 17:49:38.715666    1208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j8g8\" (UniqueName: \"kubernetes.io/projected/0e1ea699-e231-467a-a0d1-75143d1036b7-kube-api-access-5j8g8\") pod \"hello-world-app-55bf9c44b4-xfq9d\" (UID: \"0e1ea699-e231-467a-a0d1-75143d1036b7\") " pod="default/hello-world-app-55bf9c44b4-xfq9d"
	Sep 20 17:49:38 addons-679190 kubelet[1208]: I0920 17:49:38.715705    1208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0e1ea699-e231-467a-a0d1-75143d1036b7-gcp-creds\") pod \"hello-world-app-55bf9c44b4-xfq9d\" (UID: \"0e1ea699-e231-467a-a0d1-75143d1036b7\") " pod="default/hello-world-app-55bf9c44b4-xfq9d"
	Sep 20 17:49:39 addons-679190 kubelet[1208]: I0920 17:49:39.823965    1208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rjcw\" (UniqueName: \"kubernetes.io/projected/1a3b7852-a919-4f95-9e5c-20ead0de76ad-kube-api-access-8rjcw\") pod \"1a3b7852-a919-4f95-9e5c-20ead0de76ad\" (UID: \"1a3b7852-a919-4f95-9e5c-20ead0de76ad\") "
	Sep 20 17:49:39 addons-679190 kubelet[1208]: I0920 17:49:39.826831    1208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a3b7852-a919-4f95-9e5c-20ead0de76ad-kube-api-access-8rjcw" (OuterVolumeSpecName: "kube-api-access-8rjcw") pod "1a3b7852-a919-4f95-9e5c-20ead0de76ad" (UID: "1a3b7852-a919-4f95-9e5c-20ead0de76ad"). InnerVolumeSpecName "kube-api-access-8rjcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 17:49:39 addons-679190 kubelet[1208]: I0920 17:49:39.924338    1208 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8rjcw\" (UniqueName: \"kubernetes.io/projected/1a3b7852-a919-4f95-9e5c-20ead0de76ad-kube-api-access-8rjcw\") on node \"addons-679190\" DevicePath \"\""
	Sep 20 17:49:40 addons-679190 kubelet[1208]: I0920 17:49:40.294190    1208 scope.go:117] "RemoveContainer" containerID="9bf9d5c0ec974954d96f24bfd70172a55a8168c40434b059b8bb1c9a2f044392"
	Sep 20 17:49:40 addons-679190 kubelet[1208]: I0920 17:49:40.320472    1208 scope.go:117] "RemoveContainer" containerID="9bf9d5c0ec974954d96f24bfd70172a55a8168c40434b059b8bb1c9a2f044392"
	Sep 20 17:49:40 addons-679190 kubelet[1208]: E0920 17:49:40.325725    1208 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bf9d5c0ec974954d96f24bfd70172a55a8168c40434b059b8bb1c9a2f044392\": container with ID starting with 9bf9d5c0ec974954d96f24bfd70172a55a8168c40434b059b8bb1c9a2f044392 not found: ID does not exist" containerID="9bf9d5c0ec974954d96f24bfd70172a55a8168c40434b059b8bb1c9a2f044392"
	Sep 20 17:49:40 addons-679190 kubelet[1208]: I0920 17:49:40.325796    1208 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bf9d5c0ec974954d96f24bfd70172a55a8168c40434b059b8bb1c9a2f044392"} err="failed to get container status \"9bf9d5c0ec974954d96f24bfd70172a55a8168c40434b059b8bb1c9a2f044392\": rpc error: code = NotFound desc = could not find container \"9bf9d5c0ec974954d96f24bfd70172a55a8168c40434b059b8bb1c9a2f044392\": container with ID starting with 9bf9d5c0ec974954d96f24bfd70172a55a8168c40434b059b8bb1c9a2f044392 not found: ID does not exist"
	Sep 20 17:49:41 addons-679190 kubelet[1208]: I0920 17:49:41.726740    1208 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a3b7852-a919-4f95-9e5c-20ead0de76ad" path="/var/lib/kubelet/pods/1a3b7852-a919-4f95-9e5c-20ead0de76ad/volumes"
	Sep 20 17:49:41 addons-679190 kubelet[1208]: I0920 17:49:41.727542    1208 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ec98204-6d97-4ac3-a7a9-d53c47f3ab50" path="/var/lib/kubelet/pods/3ec98204-6d97-4ac3-a7a9-d53c47f3ab50/volumes"
	Sep 20 17:49:41 addons-679190 kubelet[1208]: I0920 17:49:41.728086    1208 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a62c0948-1d28-462f-9fa5-104e567a74d7" path="/var/lib/kubelet/pods/a62c0948-1d28-462f-9fa5-104e567a74d7/volumes"
	Sep 20 17:49:42 addons-679190 kubelet[1208]: E0920 17:49:42.226018    1208 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854582225454130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:49:42 addons-679190 kubelet[1208]: E0920 17:49:42.226053    1208 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854582225454130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:49:44 addons-679190 kubelet[1208]: I0920 17:49:44.257135    1208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/67a3fcb3-ce83-4bf8-a804-16a31e6d5da4-webhook-cert\") pod \"67a3fcb3-ce83-4bf8-a804-16a31e6d5da4\" (UID: \"67a3fcb3-ce83-4bf8-a804-16a31e6d5da4\") "
	Sep 20 17:49:44 addons-679190 kubelet[1208]: I0920 17:49:44.257180    1208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rsdk\" (UniqueName: \"kubernetes.io/projected/67a3fcb3-ce83-4bf8-a804-16a31e6d5da4-kube-api-access-2rsdk\") pod \"67a3fcb3-ce83-4bf8-a804-16a31e6d5da4\" (UID: \"67a3fcb3-ce83-4bf8-a804-16a31e6d5da4\") "
	Sep 20 17:49:44 addons-679190 kubelet[1208]: I0920 17:49:44.258988    1208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67a3fcb3-ce83-4bf8-a804-16a31e6d5da4-kube-api-access-2rsdk" (OuterVolumeSpecName: "kube-api-access-2rsdk") pod "67a3fcb3-ce83-4bf8-a804-16a31e6d5da4" (UID: "67a3fcb3-ce83-4bf8-a804-16a31e6d5da4"). InnerVolumeSpecName "kube-api-access-2rsdk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 17:49:44 addons-679190 kubelet[1208]: I0920 17:49:44.260146    1208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a3fcb3-ce83-4bf8-a804-16a31e6d5da4-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "67a3fcb3-ce83-4bf8-a804-16a31e6d5da4" (UID: "67a3fcb3-ce83-4bf8-a804-16a31e6d5da4"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 20 17:49:44 addons-679190 kubelet[1208]: I0920 17:49:44.320078    1208 scope.go:117] "RemoveContainer" containerID="2eaa551f6ba5a1855d1ed04c948c80937d8b2f668c643d6274f92360ad804f83"
	Sep 20 17:49:44 addons-679190 kubelet[1208]: I0920 17:49:44.352700    1208 scope.go:117] "RemoveContainer" containerID="2eaa551f6ba5a1855d1ed04c948c80937d8b2f668c643d6274f92360ad804f83"
	Sep 20 17:49:44 addons-679190 kubelet[1208]: E0920 17:49:44.353329    1208 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eaa551f6ba5a1855d1ed04c948c80937d8b2f668c643d6274f92360ad804f83\": container with ID starting with 2eaa551f6ba5a1855d1ed04c948c80937d8b2f668c643d6274f92360ad804f83 not found: ID does not exist" containerID="2eaa551f6ba5a1855d1ed04c948c80937d8b2f668c643d6274f92360ad804f83"
	Sep 20 17:49:44 addons-679190 kubelet[1208]: I0920 17:49:44.353368    1208 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eaa551f6ba5a1855d1ed04c948c80937d8b2f668c643d6274f92360ad804f83"} err="failed to get container status \"2eaa551f6ba5a1855d1ed04c948c80937d8b2f668c643d6274f92360ad804f83\": rpc error: code = NotFound desc = could not find container \"2eaa551f6ba5a1855d1ed04c948c80937d8b2f668c643d6274f92360ad804f83\": container with ID starting with 2eaa551f6ba5a1855d1ed04c948c80937d8b2f668c643d6274f92360ad804f83 not found: ID does not exist"
	Sep 20 17:49:44 addons-679190 kubelet[1208]: I0920 17:49:44.357669    1208 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/67a3fcb3-ce83-4bf8-a804-16a31e6d5da4-webhook-cert\") on node \"addons-679190\" DevicePath \"\""
	Sep 20 17:49:44 addons-679190 kubelet[1208]: I0920 17:49:44.357706    1208 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2rsdk\" (UniqueName: \"kubernetes.io/projected/67a3fcb3-ce83-4bf8-a804-16a31e6d5da4-kube-api-access-2rsdk\") on node \"addons-679190\" DevicePath \"\""
	Sep 20 17:49:45 addons-679190 kubelet[1208]: I0920 17:49:45.727859    1208 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67a3fcb3-ce83-4bf8-a804-16a31e6d5da4" path="/var/lib/kubelet/pods/67a3fcb3-ce83-4bf8-a804-16a31e6d5da4/volumes"
	
	
	==> storage-provisioner [c860a4c507c477109d4747b1b074c68a4a31b9aeee2cfc9591edda6f92a49c41] <==
	I0920 17:37:04.229131       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 17:37:04.316870       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 17:37:04.316984       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 17:37:04.369383       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 17:37:04.369574       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-679190_755bbb3d-4dd9-4398-8fc4-ca84bbdfa577!
	I0920 17:37:04.369640       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f1220de9-2330-4b06-bc0f-6bb70dd8d11a", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-679190_755bbb3d-4dd9-4398-8fc4-ca84bbdfa577 became leader
	I0920 17:37:04.470051       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-679190_755bbb3d-4dd9-4398-8fc4-ca84bbdfa577!
	

                                                
                                                
-- /stdout --
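Note on the storage-provisioner lines at the end of the dump above: "attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath..." followed by "successfully acquired lease" is a standard client-go leader-election handshake before the provisioner controller starts. The sketch below is a minimal, hypothetical illustration of that handshake, not the provisioner's actual code; it uses the newer Lease-based lock (the log shows an Endpoints-based lock), and the identity string and timings are assumptions.

// leaderelect_sketch.go — hypothetical illustration of the leader-election step
// recorded in the storage-provisioner log above; not the provisioner's own code.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Assumes KUBECONFIG points at the addons-679190 cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:    client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{
			Identity: "addons-679190_example-identity", // hypothetical identity
		},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// The real provisioner starts its controller here
				// ("Starting provisioner controller ..." in the log).
				log.Println("became leader")
			},
			OnStoppedLeading: func() {
				log.Println("lost leadership")
			},
		},
	})
}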
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-679190 -n addons-679190
helpers_test.go:261: (dbg) Run:  kubectl --context addons-679190 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-679190 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-679190 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-679190/192.168.39.158
	Start Time:       Fri, 20 Sep 2024 17:38:30 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n77bn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n77bn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/busybox to addons-679190
	  Normal   Pulling    9m47s (x4 over 11m)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     9m47s (x4 over 11m)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     9m47s (x4 over 11m)  kubelet            Error: ErrImagePull
	  Warning  Failed     9m34s (x6 over 11m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    75s (x42 over 11m)   kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.56s)
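The events in the busybox describe output above point at the underlying problem: the image pull for gcr.io/k8s-minikube/busybox:1.28.4-glibc fails with "unable to retrieve auth token: invalid username/password: unauthorized: authentication failed", so the pod never leaves ErrImagePull/ImagePullBackOff. The snippet below is a hypothetical helper (not part of addons_test.go or helpers_test.go) that fetches the same events programmatically with client-go instead of shelling out to kubectl describe; the file name and KUBECONFIG assumption are illustrative only.

// events_sketch.go — hypothetical helper to list the busybox pod's events.
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes KUBECONFIG points at the addons-679190 cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Events are namespaced; filter down to the busybox pod only.
	events, err := client.CoreV1().Events("default").List(context.Background(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=busybox",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s\t%s\t%s\n", e.Type, e.Reason, e.Message)
	}
}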

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (350.62s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.377393ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-fj4mf" [adb63308-9d43-444e-b31b-a5efeef5d323] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008123508s
addons_test.go:413: (dbg) Run:  kubectl --context addons-679190 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-679190 top pods -n kube-system: exit status 1 (108.017588ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dsxdk, age: 9m56.641581875s

                                                
                                                
** /stderr **
I0920 17:46:52.643691  244849 retry.go:31] will retry after 2.5602141s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-679190 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-679190 top pods -n kube-system: exit status 1 (66.830242ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dsxdk, age: 9m59.269029718s

                                                
                                                
** /stderr **
I0920 17:46:55.270996  244849 retry.go:31] will retry after 5.661808988s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-679190 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-679190 top pods -n kube-system: exit status 1 (73.347029ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dsxdk, age: 10m5.004719332s

                                                
                                                
** /stderr **
I0920 17:47:01.006907  244849 retry.go:31] will retry after 10.042950609s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-679190 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-679190 top pods -n kube-system: exit status 1 (64.952353ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dsxdk, age: 10m15.113124426s

                                                
                                                
** /stderr **
I0920 17:47:11.115471  244849 retry.go:31] will retry after 7.66682947s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-679190 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-679190 top pods -n kube-system: exit status 1 (69.134602ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dsxdk, age: 10m22.849406893s

                                                
                                                
** /stderr **
I0920 17:47:18.851836  244849 retry.go:31] will retry after 19.769177067s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-679190 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-679190 top pods -n kube-system: exit status 1 (149.878967ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dsxdk, age: 10m42.769962725s

                                                
                                                
** /stderr **
I0920 17:47:38.772054  244849 retry.go:31] will retry after 17.473141545s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-679190 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-679190 top pods -n kube-system: exit status 1 (62.495124ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dsxdk, age: 11m0.311050332s

                                                
                                                
** /stderr **
I0920 17:47:56.313279  244849 retry.go:31] will retry after 21.239034878s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-679190 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-679190 top pods -n kube-system: exit status 1 (65.503685ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dsxdk, age: 11m21.616264191s

                                                
                                                
** /stderr **
I0920 17:48:17.618665  244849 retry.go:31] will retry after 46.377607634s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-679190 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-679190 top pods -n kube-system: exit status 1 (65.417363ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dsxdk, age: 12m8.062055422s

                                                
                                                
** /stderr **
I0920 17:49:04.064786  244849 retry.go:31] will retry after 1m19.665753116s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-679190 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-679190 top pods -n kube-system: exit status 1 (66.280231ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dsxdk, age: 13m27.797410635s

                                                
                                                
** /stderr **
I0920 17:50:23.799887  244849 retry.go:31] will retry after 43.040867276s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-679190 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-679190 top pods -n kube-system: exit status 1 (67.016457ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dsxdk, age: 14m10.910937432s

                                                
                                                
** /stderr **
I0920 17:51:06.913420  244849 retry.go:31] will retry after 1m28.340961636s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-679190 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-679190 top pods -n kube-system: exit status 1 (68.212452ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dsxdk, age: 15m39.32438512s

                                                
                                                
** /stderr **
addons_test.go:427: failed checking metric server: exit status 1
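The "will retry after ..." lines above show the test wrapping its `kubectl top pods` probe in a jittered, growing backoff until metrics appear or the budget runs out. Below is a minimal sketch of that wait-and-retry pattern using the k8s.io/apimachinery wait package; it is not minikube's retry.go, and the backoff parameters and the exec'd kubectl invocation are illustrative assumptions.

// retry_sketch.go — minimal sketch of a jittered exponential-backoff probe,
// analogous to (but not the same as) the retry loop logged above.
package main

import (
	"log"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	backoff := wait.Backoff{
		Duration: 2 * time.Second, // first delay (illustrative)
		Factor:   1.5,             // grow each attempt
		Jitter:   0.5,             // randomize, which is why the logged delays look uneven
		Steps:    12,              // give up after 12 attempts
	}

	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		// Same probe the test runs: does `kubectl top pods` succeed yet?
		out, err := exec.Command("kubectl", "--context", "addons-679190",
			"top", "pods", "-n", "kube-system").CombinedOutput()
		if err != nil {
			log.Printf("metrics not available yet: %v\n%s", err, out)
			return false, nil // keep retrying
		}
		return true, nil // metrics available
	})
	if err != nil {
		log.Fatalf("failed checking metrics server: %v", err)
	}
}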
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-679190 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-679190 -n addons-679190
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-679190 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-679190 logs -n 25: (1.34051375s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-799771                                                                     | download-only-799771 | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC | 20 Sep 24 17:36 UTC |
	| delete  | -p download-only-591101                                                                     | download-only-591101 | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC | 20 Sep 24 17:36 UTC |
	| delete  | -p download-only-799771                                                                     | download-only-799771 | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC | 20 Sep 24 17:36 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-242308 | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC |                     |
	|         | binary-mirror-242308                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46511                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-242308                                                                     | binary-mirror-242308 | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC | 20 Sep 24 17:36 UTC |
	| addons  | disable dashboard -p                                                                        | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC |                     |
	|         | addons-679190                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC |                     |
	|         | addons-679190                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-679190 --wait=true                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC | 20 Sep 24 17:38 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:46 UTC | 20 Sep 24 17:46 UTC |
	|         | -p addons-679190                                                                            |                      |         |         |                     |                     |
	| addons  | addons-679190 addons disable                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:46 UTC | 20 Sep 24 17:46 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:46 UTC | 20 Sep 24 17:46 UTC |
	|         | -p addons-679190                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-679190 ssh cat                                                                       | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:46 UTC | 20 Sep 24 17:46 UTC |
	|         | /opt/local-path-provisioner/pvc-4a7cfa23-ab8c-4f3b-b69f-a32cbb6790dc_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:46 UTC | 20 Sep 24 17:46 UTC |
	|         | addons-679190                                                                               |                      |         |         |                     |                     |
	| addons  | addons-679190 addons disable                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:46 UTC | 20 Sep 24 17:46 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-679190 addons disable                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:46 UTC | 20 Sep 24 17:47 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:47 UTC | 20 Sep 24 17:47 UTC |
	|         | addons-679190                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-679190 ssh curl -s                                                                   | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:47 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-679190 addons                                                                        | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:47 UTC | 20 Sep 24 17:47 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-679190 ip                                                                            | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:47 UTC | 20 Sep 24 17:47 UTC |
	| addons  | addons-679190 addons disable                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:47 UTC | 20 Sep 24 17:47 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-679190 addons                                                                        | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:47 UTC | 20 Sep 24 17:47 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-679190 ip                                                                            | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:49 UTC | 20 Sep 24 17:49 UTC |
	| addons  | addons-679190 addons disable                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:49 UTC | 20 Sep 24 17:49 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-679190 addons disable                                                                | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:49 UTC | 20 Sep 24 17:49 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-679190 addons                                                                        | addons-679190        | jenkins | v1.34.0 | 20 Sep 24 17:52 UTC | 20 Sep 24 17:52 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:36:14
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:36:14.402655  245557 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:36:14.402933  245557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:36:14.402943  245557 out.go:358] Setting ErrFile to fd 2...
	I0920 17:36:14.402948  245557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:36:14.403159  245557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 17:36:14.403805  245557 out.go:352] Setting JSON to false
	I0920 17:36:14.404822  245557 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4717,"bootTime":1726849057,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:36:14.404931  245557 start.go:139] virtualization: kvm guest
	I0920 17:36:14.407275  245557 out.go:177] * [addons-679190] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:36:14.408502  245557 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 17:36:14.408541  245557 notify.go:220] Checking for updates...
	I0920 17:36:14.411057  245557 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:36:14.412803  245557 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:36:14.414198  245557 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:36:14.415792  245557 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:36:14.417282  245557 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:36:14.418952  245557 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:36:14.453245  245557 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 17:36:14.454782  245557 start.go:297] selected driver: kvm2
	I0920 17:36:14.454802  245557 start.go:901] validating driver "kvm2" against <nil>
	I0920 17:36:14.454819  245557 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:36:14.455638  245557 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:36:14.455744  245557 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 17:36:14.473296  245557 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 17:36:14.473373  245557 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:36:14.473597  245557 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:36:14.473630  245557 cni.go:84] Creating CNI manager for ""
	I0920 17:36:14.473686  245557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 17:36:14.473698  245557 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 17:36:14.473755  245557 start.go:340] cluster config:
	{Name:addons-679190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-679190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:36:14.473865  245557 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:36:14.475815  245557 out.go:177] * Starting "addons-679190" primary control-plane node in "addons-679190" cluster
	I0920 17:36:14.477065  245557 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:36:14.477119  245557 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 17:36:14.477134  245557 cache.go:56] Caching tarball of preloaded images
	I0920 17:36:14.477218  245557 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:36:14.477230  245557 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:36:14.477537  245557 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/config.json ...
	I0920 17:36:14.477565  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/config.json: {Name:mk111f108190ba76ef8034134b6af7b7147db588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:14.477758  245557 start.go:360] acquireMachinesLock for addons-679190: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:36:14.477823  245557 start.go:364] duration metric: took 47.775µs to acquireMachinesLock for "addons-679190"
	I0920 17:36:14.477861  245557 start.go:93] Provisioning new machine with config: &{Name:addons-679190 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-679190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:36:14.477966  245557 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 17:36:14.479569  245557 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0920 17:36:14.479725  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:14.479766  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:14.495292  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36423
	I0920 17:36:14.495863  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:14.496485  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:14.496509  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:14.496865  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:14.497041  245557 main.go:141] libmachine: (addons-679190) Calling .GetMachineName
	I0920 17:36:14.497187  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:14.497338  245557 start.go:159] libmachine.API.Create for "addons-679190" (driver="kvm2")
	I0920 17:36:14.497372  245557 client.go:168] LocalClient.Create starting
	I0920 17:36:14.497411  245557 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem
	I0920 17:36:14.582390  245557 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem
	I0920 17:36:14.704786  245557 main.go:141] libmachine: Running pre-create checks...
	I0920 17:36:14.704815  245557 main.go:141] libmachine: (addons-679190) Calling .PreCreateCheck
	I0920 17:36:14.705320  245557 main.go:141] libmachine: (addons-679190) Calling .GetConfigRaw
	I0920 17:36:14.705938  245557 main.go:141] libmachine: Creating machine...
	I0920 17:36:14.705960  245557 main.go:141] libmachine: (addons-679190) Calling .Create
	I0920 17:36:14.706168  245557 main.go:141] libmachine: (addons-679190) Creating KVM machine...
	I0920 17:36:14.707572  245557 main.go:141] libmachine: (addons-679190) DBG | found existing default KVM network
	I0920 17:36:14.708407  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:14.708217  245579 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00020b330}
	I0920 17:36:14.708450  245557 main.go:141] libmachine: (addons-679190) DBG | created network xml: 
	I0920 17:36:14.708468  245557 main.go:141] libmachine: (addons-679190) DBG | <network>
	I0920 17:36:14.708486  245557 main.go:141] libmachine: (addons-679190) DBG |   <name>mk-addons-679190</name>
	I0920 17:36:14.708539  245557 main.go:141] libmachine: (addons-679190) DBG |   <dns enable='no'/>
	I0920 17:36:14.708569  245557 main.go:141] libmachine: (addons-679190) DBG |   
	I0920 17:36:14.708581  245557 main.go:141] libmachine: (addons-679190) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 17:36:14.708596  245557 main.go:141] libmachine: (addons-679190) DBG |     <dhcp>
	I0920 17:36:14.708609  245557 main.go:141] libmachine: (addons-679190) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 17:36:14.708620  245557 main.go:141] libmachine: (addons-679190) DBG |     </dhcp>
	I0920 17:36:14.708631  245557 main.go:141] libmachine: (addons-679190) DBG |   </ip>
	I0920 17:36:14.708640  245557 main.go:141] libmachine: (addons-679190) DBG |   
	I0920 17:36:14.708651  245557 main.go:141] libmachine: (addons-679190) DBG | </network>
	I0920 17:36:14.708660  245557 main.go:141] libmachine: (addons-679190) DBG | 
	I0920 17:36:14.714317  245557 main.go:141] libmachine: (addons-679190) DBG | trying to create private KVM network mk-addons-679190 192.168.39.0/24...
	I0920 17:36:14.786920  245557 main.go:141] libmachine: (addons-679190) DBG | private KVM network mk-addons-679190 192.168.39.0/24 created
	I0920 17:36:14.786967  245557 main.go:141] libmachine: (addons-679190) Setting up store path in /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190 ...
	I0920 17:36:14.786983  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:14.786868  245579 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:36:14.787006  245557 main.go:141] libmachine: (addons-679190) Building disk image from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 17:36:14.787026  245557 main.go:141] libmachine: (addons-679190) Downloading /home/jenkins/minikube-integration/19679-237658/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 17:36:15.067231  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:15.067014  245579 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa...
	I0920 17:36:15.314104  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:15.313891  245579 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/addons-679190.rawdisk...
	I0920 17:36:15.314159  245557 main.go:141] libmachine: (addons-679190) DBG | Writing magic tar header
	I0920 17:36:15.314176  245557 main.go:141] libmachine: (addons-679190) DBG | Writing SSH key tar header
	I0920 17:36:15.314187  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:15.314075  245579 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190 ...
	I0920 17:36:15.314203  245557 main.go:141] libmachine: (addons-679190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190
	I0920 17:36:15.314278  245557 main.go:141] libmachine: (addons-679190) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190 (perms=drwx------)
	I0920 17:36:15.314312  245557 main.go:141] libmachine: (addons-679190) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines (perms=drwxr-xr-x)
	I0920 17:36:15.314323  245557 main.go:141] libmachine: (addons-679190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines
	I0920 17:36:15.314336  245557 main.go:141] libmachine: (addons-679190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:36:15.314343  245557 main.go:141] libmachine: (addons-679190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658
	I0920 17:36:15.314349  245557 main.go:141] libmachine: (addons-679190) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube (perms=drwxr-xr-x)
	I0920 17:36:15.314357  245557 main.go:141] libmachine: (addons-679190) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658 (perms=drwxrwxr-x)
	I0920 17:36:15.314367  245557 main.go:141] libmachine: (addons-679190) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 17:36:15.314379  245557 main.go:141] libmachine: (addons-679190) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 17:36:15.314391  245557 main.go:141] libmachine: (addons-679190) Creating domain...
	I0920 17:36:15.314402  245557 main.go:141] libmachine: (addons-679190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 17:36:15.314413  245557 main.go:141] libmachine: (addons-679190) DBG | Checking permissions on dir: /home/jenkins
	I0920 17:36:15.314423  245557 main.go:141] libmachine: (addons-679190) DBG | Checking permissions on dir: /home
	I0920 17:36:15.314435  245557 main.go:141] libmachine: (addons-679190) DBG | Skipping /home - not owner
	I0920 17:36:15.315774  245557 main.go:141] libmachine: (addons-679190) define libvirt domain using xml: 
	I0920 17:36:15.315815  245557 main.go:141] libmachine: (addons-679190) <domain type='kvm'>
	I0920 17:36:15.315826  245557 main.go:141] libmachine: (addons-679190)   <name>addons-679190</name>
	I0920 17:36:15.315834  245557 main.go:141] libmachine: (addons-679190)   <memory unit='MiB'>4000</memory>
	I0920 17:36:15.315864  245557 main.go:141] libmachine: (addons-679190)   <vcpu>2</vcpu>
	I0920 17:36:15.315882  245557 main.go:141] libmachine: (addons-679190)   <features>
	I0920 17:36:15.315888  245557 main.go:141] libmachine: (addons-679190)     <acpi/>
	I0920 17:36:15.315893  245557 main.go:141] libmachine: (addons-679190)     <apic/>
	I0920 17:36:15.315898  245557 main.go:141] libmachine: (addons-679190)     <pae/>
	I0920 17:36:15.315903  245557 main.go:141] libmachine: (addons-679190)     
	I0920 17:36:15.315908  245557 main.go:141] libmachine: (addons-679190)   </features>
	I0920 17:36:15.315915  245557 main.go:141] libmachine: (addons-679190)   <cpu mode='host-passthrough'>
	I0920 17:36:15.315933  245557 main.go:141] libmachine: (addons-679190)   
	I0920 17:36:15.315947  245557 main.go:141] libmachine: (addons-679190)   </cpu>
	I0920 17:36:15.315956  245557 main.go:141] libmachine: (addons-679190)   <os>
	I0920 17:36:15.315968  245557 main.go:141] libmachine: (addons-679190)     <type>hvm</type>
	I0920 17:36:15.315977  245557 main.go:141] libmachine: (addons-679190)     <boot dev='cdrom'/>
	I0920 17:36:15.315988  245557 main.go:141] libmachine: (addons-679190)     <boot dev='hd'/>
	I0920 17:36:15.315997  245557 main.go:141] libmachine: (addons-679190)     <bootmenu enable='no'/>
	I0920 17:36:15.316006  245557 main.go:141] libmachine: (addons-679190)   </os>
	I0920 17:36:15.316011  245557 main.go:141] libmachine: (addons-679190)   <devices>
	I0920 17:36:15.316017  245557 main.go:141] libmachine: (addons-679190)     <disk type='file' device='cdrom'>
	I0920 17:36:15.316028  245557 main.go:141] libmachine: (addons-679190)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/boot2docker.iso'/>
	I0920 17:36:15.316043  245557 main.go:141] libmachine: (addons-679190)       <target dev='hdc' bus='scsi'/>
	I0920 17:36:15.316055  245557 main.go:141] libmachine: (addons-679190)       <readonly/>
	I0920 17:36:15.316064  245557 main.go:141] libmachine: (addons-679190)     </disk>
	I0920 17:36:15.316073  245557 main.go:141] libmachine: (addons-679190)     <disk type='file' device='disk'>
	I0920 17:36:15.316112  245557 main.go:141] libmachine: (addons-679190)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 17:36:15.316140  245557 main.go:141] libmachine: (addons-679190)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/addons-679190.rawdisk'/>
	I0920 17:36:15.316151  245557 main.go:141] libmachine: (addons-679190)       <target dev='hda' bus='virtio'/>
	I0920 17:36:15.316156  245557 main.go:141] libmachine: (addons-679190)     </disk>
	I0920 17:36:15.316164  245557 main.go:141] libmachine: (addons-679190)     <interface type='network'>
	I0920 17:36:15.316171  245557 main.go:141] libmachine: (addons-679190)       <source network='mk-addons-679190'/>
	I0920 17:36:15.316176  245557 main.go:141] libmachine: (addons-679190)       <model type='virtio'/>
	I0920 17:36:15.316183  245557 main.go:141] libmachine: (addons-679190)     </interface>
	I0920 17:36:15.316188  245557 main.go:141] libmachine: (addons-679190)     <interface type='network'>
	I0920 17:36:15.316195  245557 main.go:141] libmachine: (addons-679190)       <source network='default'/>
	I0920 17:36:15.316200  245557 main.go:141] libmachine: (addons-679190)       <model type='virtio'/>
	I0920 17:36:15.316206  245557 main.go:141] libmachine: (addons-679190)     </interface>
	I0920 17:36:15.316219  245557 main.go:141] libmachine: (addons-679190)     <serial type='pty'>
	I0920 17:36:15.316228  245557 main.go:141] libmachine: (addons-679190)       <target port='0'/>
	I0920 17:36:15.316240  245557 main.go:141] libmachine: (addons-679190)     </serial>
	I0920 17:36:15.316250  245557 main.go:141] libmachine: (addons-679190)     <console type='pty'>
	I0920 17:36:15.316263  245557 main.go:141] libmachine: (addons-679190)       <target type='serial' port='0'/>
	I0920 17:36:15.316272  245557 main.go:141] libmachine: (addons-679190)     </console>
	I0920 17:36:15.316283  245557 main.go:141] libmachine: (addons-679190)     <rng model='virtio'>
	I0920 17:36:15.316295  245557 main.go:141] libmachine: (addons-679190)       <backend model='random'>/dev/random</backend>
	I0920 17:36:15.316305  245557 main.go:141] libmachine: (addons-679190)     </rng>
	I0920 17:36:15.316313  245557 main.go:141] libmachine: (addons-679190)     
	I0920 17:36:15.316348  245557 main.go:141] libmachine: (addons-679190)     
	I0920 17:36:15.316373  245557 main.go:141] libmachine: (addons-679190)   </devices>
	I0920 17:36:15.316383  245557 main.go:141] libmachine: (addons-679190) </domain>
	I0920 17:36:15.316393  245557 main.go:141] libmachine: (addons-679190) 
	I0920 17:36:15.320892  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:3c:0d:15 in network default
	I0920 17:36:15.321583  245557 main.go:141] libmachine: (addons-679190) Ensuring networks are active...
	I0920 17:36:15.321600  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:15.322455  245557 main.go:141] libmachine: (addons-679190) Ensuring network default is active
	I0920 17:36:15.322876  245557 main.go:141] libmachine: (addons-679190) Ensuring network mk-addons-679190 is active
	I0920 17:36:15.323465  245557 main.go:141] libmachine: (addons-679190) Getting domain xml...
	I0920 17:36:15.324200  245557 main.go:141] libmachine: (addons-679190) Creating domain...
	I0920 17:36:16.552011  245557 main.go:141] libmachine: (addons-679190) Waiting to get IP...
	I0920 17:36:16.552931  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:16.553409  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:16.553467  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:16.553404  245579 retry.go:31] will retry after 233.074861ms: waiting for machine to come up
	I0920 17:36:16.788019  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:16.788566  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:16.788598  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:16.788486  245579 retry.go:31] will retry after 254.61991ms: waiting for machine to come up
	I0920 17:36:17.044950  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:17.045459  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:17.045481  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:17.045403  245579 retry.go:31] will retry after 378.47406ms: waiting for machine to come up
	I0920 17:36:17.424996  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:17.425465  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:17.425530  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:17.425456  245579 retry.go:31] will retry after 555.098735ms: waiting for machine to come up
	I0920 17:36:17.982414  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:17.982850  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:17.982872  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:17.982792  245579 retry.go:31] will retry after 674.733173ms: waiting for machine to come up
	I0920 17:36:18.658928  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:18.659386  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:18.659419  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:18.659377  245579 retry.go:31] will retry after 611.03774ms: waiting for machine to come up
	I0920 17:36:19.272181  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:19.272670  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:19.272694  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:19.272607  245579 retry.go:31] will retry after 945.481389ms: waiting for machine to come up
	I0920 17:36:20.219424  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:20.219953  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:20.219984  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:20.219887  245579 retry.go:31] will retry after 1.421505917s: waiting for machine to come up
	I0920 17:36:21.643502  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:21.643959  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:21.643984  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:21.643882  245579 retry.go:31] will retry after 1.172513378s: waiting for machine to come up
	I0920 17:36:22.818244  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:22.818633  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:22.818660  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:22.818591  245579 retry.go:31] will retry after 1.867074328s: waiting for machine to come up
	I0920 17:36:24.687694  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:24.688210  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:24.688237  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:24.688136  245579 retry.go:31] will retry after 2.905548451s: waiting for machine to come up
	I0920 17:36:27.597342  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:27.597969  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:27.597998  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:27.597896  245579 retry.go:31] will retry after 3.379184262s: waiting for machine to come up
	I0920 17:36:30.979086  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:30.979495  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find current IP address of domain addons-679190 in network mk-addons-679190
	I0920 17:36:30.979519  245557 main.go:141] libmachine: (addons-679190) DBG | I0920 17:36:30.979448  245579 retry.go:31] will retry after 3.110787974s: waiting for machine to come up
	I0920 17:36:34.093921  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.094329  245557 main.go:141] libmachine: (addons-679190) Found IP for machine: 192.168.39.158
	I0920 17:36:34.094349  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has current primary IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.094357  245557 main.go:141] libmachine: (addons-679190) Reserving static IP address...
	I0920 17:36:34.094749  245557 main.go:141] libmachine: (addons-679190) DBG | unable to find host DHCP lease matching {name: "addons-679190", mac: "52:54:00:40:27:d9", ip: "192.168.39.158"} in network mk-addons-679190
	I0920 17:36:34.175576  245557 main.go:141] libmachine: (addons-679190) Reserved static IP address: 192.168.39.158
	I0920 17:36:34.175604  245557 main.go:141] libmachine: (addons-679190) DBG | Getting to WaitForSSH function...
	I0920 17:36:34.175611  245557 main.go:141] libmachine: (addons-679190) Waiting for SSH to be available...
	I0920 17:36:34.178818  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.179284  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:minikube Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.179318  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.179535  245557 main.go:141] libmachine: (addons-679190) DBG | Using SSH client type: external
	I0920 17:36:34.179710  245557 main.go:141] libmachine: (addons-679190) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa (-rw-------)
	I0920 17:36:34.179795  245557 main.go:141] libmachine: (addons-679190) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 17:36:34.179828  245557 main.go:141] libmachine: (addons-679190) DBG | About to run SSH command:
	I0920 17:36:34.179847  245557 main.go:141] libmachine: (addons-679190) DBG | exit 0
	I0920 17:36:34.306044  245557 main.go:141] libmachine: (addons-679190) DBG | SSH cmd err, output: <nil>: 
	I0920 17:36:34.306371  245557 main.go:141] libmachine: (addons-679190) KVM machine creation complete!
	I0920 17:36:34.306713  245557 main.go:141] libmachine: (addons-679190) Calling .GetConfigRaw
	I0920 17:36:34.307406  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:34.307658  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:34.307833  245557 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 17:36:34.307846  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:34.309410  245557 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 17:36:34.309438  245557 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 17:36:34.309444  245557 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 17:36:34.309450  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:34.312360  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.312741  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.312770  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.312993  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:34.313211  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.313408  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.313560  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:34.313751  245557 main.go:141] libmachine: Using SSH client type: native
	I0920 17:36:34.314059  245557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0920 17:36:34.314074  245557 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 17:36:34.421222  245557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:36:34.421246  245557 main.go:141] libmachine: Detecting the provisioner...
	I0920 17:36:34.421255  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:34.424519  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.424951  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.424984  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.425125  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:34.425370  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.425509  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.425630  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:34.425752  245557 main.go:141] libmachine: Using SSH client type: native
	I0920 17:36:34.425952  245557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0920 17:36:34.425963  245557 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 17:36:34.534619  245557 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 17:36:34.534731  245557 main.go:141] libmachine: found compatible host: buildroot
	I0920 17:36:34.534745  245557 main.go:141] libmachine: Provisioning with buildroot...
	I0920 17:36:34.534753  245557 main.go:141] libmachine: (addons-679190) Calling .GetMachineName
	I0920 17:36:34.535038  245557 buildroot.go:166] provisioning hostname "addons-679190"
	I0920 17:36:34.535064  245557 main.go:141] libmachine: (addons-679190) Calling .GetMachineName
	I0920 17:36:34.535245  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:34.538122  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.538459  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.538489  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.538610  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:34.538795  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.538955  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.539101  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:34.539263  245557 main.go:141] libmachine: Using SSH client type: native
	I0920 17:36:34.539465  245557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0920 17:36:34.539483  245557 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-679190 && echo "addons-679190" | sudo tee /etc/hostname
	I0920 17:36:34.663598  245557 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-679190
	
	I0920 17:36:34.663632  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:34.666622  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.667078  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.667114  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.667316  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:34.667476  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.667667  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.667787  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:34.667933  245557 main.go:141] libmachine: Using SSH client type: native
	I0920 17:36:34.668103  245557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0920 17:36:34.668119  245557 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-679190' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-679190/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-679190' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:36:34.787041  245557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:36:34.787076  245557 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 17:36:34.787136  245557 buildroot.go:174] setting up certificates
	I0920 17:36:34.787154  245557 provision.go:84] configureAuth start
	I0920 17:36:34.787172  245557 main.go:141] libmachine: (addons-679190) Calling .GetMachineName
	I0920 17:36:34.787485  245557 main.go:141] libmachine: (addons-679190) Calling .GetIP
	I0920 17:36:34.790870  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.791296  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.791324  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.791540  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:34.793848  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.794252  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.794283  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.794450  245557 provision.go:143] copyHostCerts
	I0920 17:36:34.794535  245557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 17:36:34.794685  245557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 17:36:34.794773  245557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 17:36:34.794847  245557 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.addons-679190 san=[127.0.0.1 192.168.39.158 addons-679190 localhost minikube]
	I0920 17:36:34.890555  245557 provision.go:177] copyRemoteCerts
	I0920 17:36:34.890650  245557 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:36:34.890686  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:34.893735  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.894102  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:34.894133  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:34.894315  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:34.894532  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:34.894715  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:34.894855  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:34.980634  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 17:36:35.005273  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 17:36:35.029188  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 17:36:35.052832  245557 provision.go:87] duration metric: took 265.657137ms to configureAuth
	I0920 17:36:35.052876  245557 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:36:35.053063  245557 config.go:182] Loaded profile config "addons-679190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:36:35.053145  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:35.056181  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.056518  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.056559  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.056787  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:35.056985  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:35.057136  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:35.057315  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:35.057524  245557 main.go:141] libmachine: Using SSH client type: native
	I0920 17:36:35.057740  245557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0920 17:36:35.057756  245557 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:36:35.573462  245557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:36:35.573493  245557 main.go:141] libmachine: Checking connection to Docker...
	I0920 17:36:35.573502  245557 main.go:141] libmachine: (addons-679190) Calling .GetURL
	I0920 17:36:35.574853  245557 main.go:141] libmachine: (addons-679190) DBG | Using libvirt version 6000000
	I0920 17:36:35.576713  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.577033  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.577063  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.577214  245557 main.go:141] libmachine: Docker is up and running!
	I0920 17:36:35.577231  245557 main.go:141] libmachine: Reticulating splines...
	I0920 17:36:35.577240  245557 client.go:171] duration metric: took 21.079858169s to LocalClient.Create
	I0920 17:36:35.577264  245557 start.go:167] duration metric: took 21.079928938s to libmachine.API.Create "addons-679190"
	I0920 17:36:35.577275  245557 start.go:293] postStartSetup for "addons-679190" (driver="kvm2")
	I0920 17:36:35.577284  245557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:36:35.577302  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:35.577559  245557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:36:35.577583  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:35.579661  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.579997  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.580031  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.580129  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:35.580313  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:35.580436  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:35.580539  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:35.664189  245557 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:36:35.668353  245557 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:36:35.668386  245557 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 17:36:35.668464  245557 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 17:36:35.668487  245557 start.go:296] duration metric: took 91.20684ms for postStartSetup
	I0920 17:36:35.668527  245557 main.go:141] libmachine: (addons-679190) Calling .GetConfigRaw
	I0920 17:36:35.669134  245557 main.go:141] libmachine: (addons-679190) Calling .GetIP
	I0920 17:36:35.671946  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.672345  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.672368  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.672652  245557 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/config.json ...
	I0920 17:36:35.672885  245557 start.go:128] duration metric: took 21.194903618s to createHost
	I0920 17:36:35.672915  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:35.675216  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.675474  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.675498  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.675604  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:35.675764  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:35.675940  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:35.676046  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:35.676204  245557 main.go:141] libmachine: Using SSH client type: native
	I0920 17:36:35.676362  245557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0920 17:36:35.676372  245557 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:36:35.786755  245557 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726853795.758756532
	
	I0920 17:36:35.786780  245557 fix.go:216] guest clock: 1726853795.758756532
	I0920 17:36:35.786799  245557 fix.go:229] Guest: 2024-09-20 17:36:35.758756532 +0000 UTC Remote: 2024-09-20 17:36:35.672900424 +0000 UTC m=+21.305727812 (delta=85.856108ms)
	I0920 17:36:35.786847  245557 fix.go:200] guest clock delta is within tolerance: 85.856108ms
	I0920 17:36:35.786854  245557 start.go:83] releasing machines lock for "addons-679190", held for 21.309019314s
	I0920 17:36:35.786901  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:35.787199  245557 main.go:141] libmachine: (addons-679190) Calling .GetIP
	I0920 17:36:35.790139  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.790527  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.790550  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.790715  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:35.791190  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:35.791390  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:35.791498  245557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:36:35.791545  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:35.791598  245557 ssh_runner.go:195] Run: cat /version.json
	I0920 17:36:35.791651  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:35.794437  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.794670  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.794822  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.794852  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.795016  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:35.795136  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:35.795161  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:35.795193  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:35.795310  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:35.795381  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:35.795460  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:35.795532  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:35.795596  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:35.795696  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:35.911918  245557 ssh_runner.go:195] Run: systemctl --version
	I0920 17:36:35.917670  245557 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:36:36.074996  245557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:36:36.080814  245557 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:36:36.080895  245557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:36:36.096152  245557 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 17:36:36.096189  245557 start.go:495] detecting cgroup driver to use...
	I0920 17:36:36.096260  245557 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:36:36.113653  245557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:36:36.128855  245557 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:36:36.128933  245557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:36:36.143261  245557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:36:36.157398  245557 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:36:36.266690  245557 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:36:36.425266  245557 docker.go:233] disabling docker service ...
	I0920 17:36:36.425347  245557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:36:36.446451  245557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:36:36.459829  245557 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:36:36.571061  245557 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:36:36.683832  245557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:36:36.698810  245557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:36:36.718244  245557 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:36:36.718313  245557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:36:36.729705  245557 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:36:36.729784  245557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:36:36.741247  245557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:36:36.752134  245557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:36:36.762794  245557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:36:36.773800  245557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:36:36.784266  245557 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:36:36.801953  245557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:36:36.812569  245557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:36:36.822394  245557 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 17:36:36.822468  245557 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 17:36:36.835966  245557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:36:36.845803  245557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:36:36.958625  245557 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 17:36:37.052231  245557 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:36:37.052346  245557 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:36:37.057614  245557 start.go:563] Will wait 60s for crictl version
	I0920 17:36:37.057825  245557 ssh_runner.go:195] Run: which crictl
	I0920 17:36:37.061526  245557 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:36:37.105824  245557 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:36:37.105959  245557 ssh_runner.go:195] Run: crio --version
	I0920 17:36:37.136539  245557 ssh_runner.go:195] Run: crio --version
	I0920 17:36:37.171796  245557 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:36:37.173345  245557 main.go:141] libmachine: (addons-679190) Calling .GetIP
	I0920 17:36:37.176324  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:37.176764  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:37.176792  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:37.177021  245557 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:36:37.181300  245557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:36:37.194040  245557 kubeadm.go:883] updating cluster {Name:addons-679190 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-679190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 17:36:37.194155  245557 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:36:37.194199  245557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:36:37.225234  245557 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 17:36:37.225302  245557 ssh_runner.go:195] Run: which lz4
	I0920 17:36:37.229191  245557 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 17:36:37.233185  245557 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 17:36:37.233226  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 17:36:38.392285  245557 crio.go:462] duration metric: took 1.163136107s to copy over tarball
	I0920 17:36:38.392376  245557 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 17:36:40.499360  245557 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.106950323s)
	I0920 17:36:40.499391  245557 crio.go:469] duration metric: took 2.107072401s to extract the tarball
	I0920 17:36:40.499401  245557 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 17:36:40.535110  245557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:36:40.583829  245557 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 17:36:40.583859  245557 cache_images.go:84] Images are preloaded, skipping loading
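Note: the first "crictl images" check found no preloaded kube-apiserver image, so the ~388 MB preload tarball was copied in and unpacked into /var; the second check then reported all images present. A sketch of that extraction step run locally (assumes tar and lz4 are installed and sudo is available; illustrative only, not minikube's code path):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Same extraction the log shows: unpack the lz4-compressed preload
	// tarball into /var so the CRI-O image store is prepopulated.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extracting preload failed: %v\n%s", err, out)
	}
}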
	I0920 17:36:40.583871  245557 kubeadm.go:934] updating node { 192.168.39.158 8443 v1.31.1 crio true true} ...
	I0920 17:36:40.584018  245557 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-679190 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-679190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 17:36:40.584106  245557 ssh_runner.go:195] Run: crio config
	I0920 17:36:40.641090  245557 cni.go:84] Creating CNI manager for ""
	I0920 17:36:40.641113  245557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 17:36:40.641123  245557 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 17:36:40.641149  245557 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.158 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-679190 NodeName:addons-679190 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 17:36:40.641304  245557 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-679190"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 17:36:40.641382  245557 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:36:40.652528  245557 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 17:36:40.652607  245557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 17:36:40.663453  245557 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 17:36:40.681121  245557 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:36:40.698855  245557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0920 17:36:40.717572  245557 ssh_runner.go:195] Run: grep 192.168.39.158	control-plane.minikube.internal$ /etc/hosts
	I0920 17:36:40.721648  245557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:36:40.733213  245557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:36:40.847265  245557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:36:40.863856  245557 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190 for IP: 192.168.39.158
	I0920 17:36:40.863898  245557 certs.go:194] generating shared ca certs ...
	I0920 17:36:40.863925  245557 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:40.864134  245557 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 17:36:41.007978  245557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt ...
	I0920 17:36:41.008017  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt: {Name:mkbb1e3a51019c4e83406d8748ea8210552ea552 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.008221  245557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key ...
	I0920 17:36:41.008234  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key: {Name:mk2dcada8581decbc501b050c6a03f21e66e112a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.008308  245557 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 17:36:41.129733  245557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt ...
	I0920 17:36:41.129766  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt: {Name:mke04674cac70a8962a647c3804e5e99b455bf6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.129942  245557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key ...
	I0920 17:36:41.129953  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key: {Name:mkb6f1f78834acbea54fe32363e27f933f4228ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.130023  245557 certs.go:256] generating profile certs ...
	I0920 17:36:41.130084  245557 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.key
	I0920 17:36:41.130099  245557 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt with IP's: []
	I0920 17:36:41.201155  245557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt ...
	I0920 17:36:41.201188  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: {Name:mk1833d3bbb2c8e05579222e591c1458c577f545 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.201349  245557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.key ...
	I0920 17:36:41.201360  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.key: {Name:mkace5ffe93f144a352a62d890af2292b0d676e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.201423  245557 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.key.83bf7d9f
	I0920 17:36:41.201440  245557 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.crt.83bf7d9f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.158]
	I0920 17:36:41.370047  245557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.crt.83bf7d9f ...
	I0920 17:36:41.370080  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.crt.83bf7d9f: {Name:mkf5b06795843289171f8aec4b7922bbb13be891 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.370249  245557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.key.83bf7d9f ...
	I0920 17:36:41.370262  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.key.83bf7d9f: {Name:mka349d2513fe2d14b9ca6aa0bfa8d7a73378d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.370335  245557 certs.go:381] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.crt.83bf7d9f -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.crt
	I0920 17:36:41.370407  245557 certs.go:385] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.key.83bf7d9f -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.key
	I0920 17:36:41.370452  245557 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.key
	I0920 17:36:41.370468  245557 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.crt with IP's: []
	I0920 17:36:41.587021  245557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.crt ...
	I0920 17:36:41.587061  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.crt: {Name:mkfc4b71c33e958d6677e7223f0b780b75e49b3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.587221  245557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.key ...
	I0920 17:36:41.587234  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.key: {Name:mk3aa7527b80ede87bad50a2915cf2799293254d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:41.587394  245557 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:36:41.587429  245557 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 17:36:41.587456  245557 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:36:41.587475  245557 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
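Note: certs.go first generates the shared minikubeCA and proxyClientCA pairs, then the per-profile certificates (the kubeconfig client cert, an apiserver cert covering 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.158, and the aggregator proxy-client cert). A standard-library Go sketch of the same kind of self-signed CA creation (key size and lifetime are illustrative, not minikube's exact settings):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Self-signed CA roughly in the spirit of the "minikubeCA" generation above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	// Emit the cert and key as PEM, the same on-disk format as ca.crt / ca.key.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}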
	I0920 17:36:41.588059  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:36:41.613973  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:36:41.636373  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:36:41.669307  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 17:36:41.693224  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 17:36:41.716434  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 17:36:41.739030  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:36:41.761987  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:36:41.785735  245557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:36:41.808837  245557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 17:36:41.824917  245557 ssh_runner.go:195] Run: openssl version
	I0920 17:36:41.830533  245557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:36:41.841288  245557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:36:41.845628  245557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:36:41.845706  245557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:36:41.851639  245557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:36:41.864422  245557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:36:41.868781  245557 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:36:41.868858  245557 kubeadm.go:392] StartCluster: {Name:addons-679190 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-679190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:36:41.868969  245557 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 17:36:41.869033  245557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 17:36:41.908638  245557 cri.go:89] found id: ""
	I0920 17:36:41.908716  245557 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 17:36:41.918913  245557 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 17:36:41.929048  245557 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 17:36:41.939489  245557 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 17:36:41.939517  245557 kubeadm.go:157] found existing configuration files:
	
	I0920 17:36:41.939604  245557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 17:36:41.948942  245557 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 17:36:41.949013  245557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 17:36:41.958442  245557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 17:36:41.967545  245557 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 17:36:41.967615  245557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 17:36:41.977594  245557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 17:36:41.987246  245557 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 17:36:41.987350  245557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 17:36:41.997309  245557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 17:36:42.006453  245557 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 17:36:42.006522  245557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 17:36:42.016044  245557 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 17:36:42.080202  245557 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 17:36:42.080363  245557 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 17:36:42.176051  245557 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 17:36:42.176190  245557 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 17:36:42.176291  245557 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 17:36:42.188037  245557 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 17:36:42.196848  245557 out.go:235]   - Generating certificates and keys ...
	I0920 17:36:42.196960  245557 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 17:36:42.197037  245557 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 17:36:42.434562  245557 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 17:36:42.521395  245557 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 17:36:42.607758  245557 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 17:36:42.669378  245557 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 17:36:42.904167  245557 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 17:36:42.904374  245557 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-679190 localhost] and IPs [192.168.39.158 127.0.0.1 ::1]
	I0920 17:36:43.188202  245557 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 17:36:43.188434  245557 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-679190 localhost] and IPs [192.168.39.158 127.0.0.1 ::1]
	I0920 17:36:43.287638  245557 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 17:36:43.473845  245557 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 17:36:43.593299  245557 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 17:36:43.593384  245557 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 17:36:43.987222  245557 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 17:36:44.336150  245557 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 17:36:44.457367  245557 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 17:36:44.695860  245557 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 17:36:44.844623  245557 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 17:36:44.845027  245557 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 17:36:44.847431  245557 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 17:36:44.849263  245557 out.go:235]   - Booting up control plane ...
	I0920 17:36:44.849358  245557 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 17:36:44.849439  245557 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 17:36:44.849514  245557 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 17:36:44.866081  245557 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 17:36:44.873618  245557 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 17:36:44.873725  245557 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 17:36:44.992494  245557 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 17:36:44.992682  245557 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 17:36:45.493964  245557 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.923125ms
	I0920 17:36:45.494050  245557 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 17:36:50.993341  245557 kubeadm.go:310] [api-check] The API server is healthy after 5.503314416s
	I0920 17:36:51.014477  245557 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 17:36:51.035005  245557 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 17:36:51.064511  245557 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 17:36:51.064710  245557 kubeadm.go:310] [mark-control-plane] Marking the node addons-679190 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 17:36:51.081848  245557 kubeadm.go:310] [bootstrap-token] Using token: r0jau5.grdtbm10vjda8jxv
	I0920 17:36:51.083289  245557 out.go:235]   - Configuring RBAC rules ...
	I0920 17:36:51.083448  245557 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 17:36:51.089533  245557 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 17:36:51.109444  245557 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 17:36:51.114960  245557 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 17:36:51.119855  245557 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 17:36:51.128234  245557 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 17:36:51.412359  245557 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 17:36:51.848915  245557 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 17:36:52.420530  245557 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 17:36:52.421383  245557 kubeadm.go:310] 
	I0920 17:36:52.421451  245557 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 17:36:52.421460  245557 kubeadm.go:310] 
	I0920 17:36:52.421602  245557 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 17:36:52.421621  245557 kubeadm.go:310] 
	I0920 17:36:52.421658  245557 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 17:36:52.421740  245557 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 17:36:52.421795  245557 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 17:36:52.421807  245557 kubeadm.go:310] 
	I0920 17:36:52.421870  245557 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 17:36:52.421881  245557 kubeadm.go:310] 
	I0920 17:36:52.421965  245557 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 17:36:52.421977  245557 kubeadm.go:310] 
	I0920 17:36:52.422055  245557 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 17:36:52.422173  245557 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 17:36:52.422286  245557 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 17:36:52.422300  245557 kubeadm.go:310] 
	I0920 17:36:52.422432  245557 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 17:36:52.422559  245557 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 17:36:52.422572  245557 kubeadm.go:310] 
	I0920 17:36:52.422674  245557 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r0jau5.grdtbm10vjda8jxv \
	I0920 17:36:52.422819  245557 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 \
	I0920 17:36:52.422877  245557 kubeadm.go:310] 	--control-plane 
	I0920 17:36:52.422886  245557 kubeadm.go:310] 
	I0920 17:36:52.422961  245557 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 17:36:52.422970  245557 kubeadm.go:310] 
	I0920 17:36:52.423049  245557 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r0jau5.grdtbm10vjda8jxv \
	I0920 17:36:52.423143  245557 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 
	I0920 17:36:52.423925  245557 kubeadm.go:310] W0920 17:36:42.058037     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:36:52.424286  245557 kubeadm.go:310] W0920 17:36:42.059124     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:36:52.424412  245557 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
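Note: the [api-check] phase above polls the API server's /healthz endpoint until it answers (healthy after ~5.5s here). A rough Go equivalent of such a poll (hypothetical; it skips TLS verification for brevity, which a real client should not do):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget kubeadm mentions
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.158:8443/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("API server healthy")
			return
		}
		if err == nil {
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for API server")
}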
	I0920 17:36:52.424454  245557 cni.go:84] Creating CNI manager for ""
	I0920 17:36:52.424467  245557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 17:36:52.426470  245557 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 17:36:52.427945  245557 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 17:36:52.438400  245557 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
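Note: with the kvm2 driver and the crio runtime, minikube recommends the bridge CNI and writes /etc/cni/net.d/1-k8s.conflist. The exact 496-byte file is not reproduced in the log; a typical bridge conflist for the 10.244.0.0/16 pod CIDR looks roughly like this (generic example, not necessarily byte-for-byte what was written):

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}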
	I0920 17:36:52.456765  245557 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 17:36:52.456859  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:52.456882  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-679190 minikube.k8s.io/updated_at=2024_09_20T17_36_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=addons-679190 minikube.k8s.io/primary=true
	I0920 17:36:52.484735  245557 ops.go:34] apiserver oom_adj: -16
	I0920 17:36:52.608325  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:53.108755  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:53.609368  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:54.109047  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:54.608496  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:55.109057  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:55.608759  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:56.108486  245557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:36:56.176721  245557 kubeadm.go:1113] duration metric: took 3.719930405s to wait for elevateKubeSystemPrivileges
	I0920 17:36:56.176772  245557 kubeadm.go:394] duration metric: took 14.307920068s to StartCluster
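Note: the repeated "kubectl get sa default" invocations above are a readiness poll; StartCluster is not considered finished until the default ServiceAccount exists, after which the minikube-rbac cluster-admin binding and the addon deployments can proceed. A small Go sketch of that retry pattern (hypothetical helper; binary path and timeout are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA re-runs "kubectl get sa default" until it succeeds or the
// deadline passes, the same pattern as the repeated invocations in the log.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}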
	I0920 17:36:56.176799  245557 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:56.176943  245557 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:36:56.177302  245557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:36:56.177559  245557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 17:36:56.177585  245557 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:36:56.177698  245557 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 17:36:56.177839  245557 addons.go:69] Setting yakd=true in profile "addons-679190"
	I0920 17:36:56.177853  245557 config.go:182] Loaded profile config "addons-679190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:36:56.177868  245557 addons.go:69] Setting metrics-server=true in profile "addons-679190"
	I0920 17:36:56.177883  245557 addons.go:234] Setting addon metrics-server=true in "addons-679190"
	I0920 17:36:56.177861  245557 addons.go:69] Setting inspektor-gadget=true in profile "addons-679190"
	I0920 17:36:56.177860  245557 addons.go:234] Setting addon yakd=true in "addons-679190"
	I0920 17:36:56.177925  245557 addons.go:234] Setting addon inspektor-gadget=true in "addons-679190"
	I0920 17:36:56.177941  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.177951  245557 addons.go:69] Setting registry=true in profile "addons-679190"
	I0920 17:36:56.177965  245557 addons.go:234] Setting addon registry=true in "addons-679190"
	I0920 17:36:56.177977  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.177987  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.177981  245557 addons.go:69] Setting default-storageclass=true in profile "addons-679190"
	I0920 17:36:56.178004  245557 addons.go:69] Setting storage-provisioner=true in profile "addons-679190"
	I0920 17:36:56.178010  245557 addons.go:69] Setting gcp-auth=true in profile "addons-679190"
	I0920 17:36:56.178020  245557 addons.go:234] Setting addon storage-provisioner=true in "addons-679190"
	I0920 17:36:56.178034  245557 addons.go:69] Setting cloud-spanner=true in profile "addons-679190"
	I0920 17:36:56.178041  245557 mustload.go:65] Loading cluster: addons-679190
	I0920 17:36:56.178050  245557 addons.go:234] Setting addon cloud-spanner=true in "addons-679190"
	I0920 17:36:56.178062  245557 addons.go:69] Setting volcano=true in profile "addons-679190"
	I0920 17:36:56.178075  245557 addons.go:234] Setting addon volcano=true in "addons-679190"
	I0920 17:36:56.178081  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.178094  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.178175  245557 addons.go:69] Setting ingress-dns=true in profile "addons-679190"
	I0920 17:36:56.178199  245557 addons.go:234] Setting addon ingress-dns=true in "addons-679190"
	I0920 17:36:56.178209  245557 config.go:182] Loaded profile config "addons-679190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:36:56.178245  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.177943  245557 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-679190"
	I0920 17:36:56.178273  245557 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-679190"
	I0920 17:36:56.178311  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.178483  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.178495  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.178513  245557 addons.go:69] Setting volumesnapshots=true in profile "addons-679190"
	I0920 17:36:56.178526  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.178532  245557 addons.go:234] Setting addon volumesnapshots=true in "addons-679190"
	I0920 17:36:56.178545  245557 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-679190"
	I0920 17:36:56.178483  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.178588  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.178533  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.178643  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.177984  245557 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-679190"
	I0920 17:36:56.178689  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.178694  245557 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-679190"
	I0920 17:36:56.178699  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.178709  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.178728  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.178810  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.178879  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.178557  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.179064  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.178020  245557 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-679190"
	I0920 17:36:56.179099  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.178495  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.179247  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.177991  245557 addons.go:69] Setting ingress=true in profile "addons-679190"
	I0920 17:36:56.179277  245557 addons.go:234] Setting addon ingress=true in "addons-679190"
	I0920 17:36:56.179294  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.179319  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.177934  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.178678  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.179597  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.178052  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.178588  245557 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-679190"
	I0920 17:36:56.180168  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.180484  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.180521  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.180557  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.180594  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.180778  245557 out.go:177] * Verifying Kubernetes components...
	I0920 17:36:56.182381  245557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:36:56.198770  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38321
	I0920 17:36:56.199048  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I0920 17:36:56.199209  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34951
	I0920 17:36:56.199459  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.199673  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.199690  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.199783  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36793
	I0920 17:36:56.199983  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.200000  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.200323  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.200418  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.200444  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.200491  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.200938  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.200956  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.201021  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.201085  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.201102  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.201191  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.201335  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.201376  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.201425  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.201697  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.201767  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41267
	I0920 17:36:56.202320  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.202366  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.202504  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.202548  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.202622  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.203097  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.203117  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.203410  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.203946  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.203983  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.206753  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.206798  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.207297  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.207335  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.208443  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.208479  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.211454  245557 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-679190"
	I0920 17:36:56.211505  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.211883  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.211928  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.216373  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43059
	I0920 17:36:56.216884  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.217519  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.217551  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.217867  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.218517  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.218557  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.220437  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
	I0920 17:36:56.220844  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.221299  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.221320  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.221697  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.222270  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.222325  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.232194  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I0920 17:36:56.232900  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.233545  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.233577  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.233996  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.234205  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.235969  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.236411  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.236459  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.241821  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0920 17:36:56.242381  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.242943  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.242972  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.243334  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.243565  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.246291  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0920 17:36:56.246829  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.247399  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.247419  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.247811  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.248026  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39577
	I0920 17:36:56.248056  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.248707  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37839
	I0920 17:36:56.249331  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.249815  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.249832  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.250336  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.250958  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.251000  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.251218  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45981
	I0920 17:36:56.251950  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.252204  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44715
	I0920 17:36:56.254426  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.255080  245557 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 17:36:56.256399  245557 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 17:36:56.256419  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 17:36:56.256441  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.256553  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38415
	I0920 17:36:56.257740  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I0920 17:36:56.257771  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0920 17:36:56.257868  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.257981  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.258005  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.258066  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.258116  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.258582  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.258601  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.258760  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.258780  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.258948  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.258963  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.259056  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.259091  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.259220  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.259239  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.259287  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.259455  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.259471  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.259756  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.259781  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.259825  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.259847  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46059
	I0920 17:36:56.259938  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.259979  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.260044  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.260089  245557 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 17:36:56.260188  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.260211  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.260684  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.260693  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.261178  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.261337  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.261375  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.261393  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.261456  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.261758  245557 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 17:36:56.261779  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 17:36:56.261799  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.262451  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.263727  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.263750  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.264542  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.264766  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.265197  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.265267  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.265425  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.265871  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.266255  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.266942  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.266966  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.267051  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.267450  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.267450  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.267501  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.267624  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.267627  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.267797  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.267885  245557 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 17:36:56.267955  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.268130  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.268477  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.268931  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:36:56.268949  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:36:56.269155  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.269223  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:36:56.269254  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:36:56.269261  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:36:56.269269  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:36:56.269276  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:36:56.270959  245557 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 17:36:56.272219  245557 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 17:36:56.272238  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 17:36:56.272258  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.272345  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:36:56.272373  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:36:56.272384  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	W0920 17:36:56.272473  245557 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 17:36:56.272957  245557 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 17:36:56.274416  245557 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 17:36:56.274435  245557 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 17:36:56.274458  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.278501  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.278771  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.278948  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.278966  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.279120  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.279308  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.279334  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.279356  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.279460  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.279524  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.279789  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.279989  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.280167  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.280659  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.283752  245557 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0920 17:36:56.285194  245557 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 17:36:56.285213  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 17:36:56.285236  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.287357  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46099
	I0920 17:36:56.288285  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.288568  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I0920 17:36:56.289501  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.289586  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.289657  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.289686  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.290284  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.290294  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.290306  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.290351  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.290365  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.290723  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.290770  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.290785  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.290986  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.291421  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.291439  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.291683  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.291884  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.292442  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43191
	I0920 17:36:56.292861  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.293333  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.293359  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.293708  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.294249  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.294285  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.296604  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0920 17:36:56.297114  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.297352  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33903
	I0920 17:36:56.297691  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.297708  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.297880  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.298156  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.300334  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45455
	I0920 17:36:56.300338  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.300443  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.300462  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.300908  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.301356  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.301512  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.301528  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.301592  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.301994  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.302179  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.302935  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.304975  245557 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 17:36:56.305417  245557 addons.go:234] Setting addon default-storageclass=true in "addons-679190"
	I0920 17:36:56.305487  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:36:56.305884  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.305971  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.306205  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.306459  245557 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 17:36:56.306481  245557 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 17:36:56.306510  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.308001  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 17:36:56.309187  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41931
	I0920 17:36:56.309544  245557 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 17:36:56.309565  245557 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 17:36:56.309594  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.309653  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38235
	I0920 17:36:56.310121  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.310680  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.310703  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.310778  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.311300  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.311383  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.311411  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.311552  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.311837  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.312758  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.312848  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.313699  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.314421  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.314691  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.314716  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.314740  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.314836  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.315154  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.315267  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.315378  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.317370  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.317387  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.317414  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37403
	I0920 17:36:56.317504  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37673
	I0920 17:36:56.317566  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.318020  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.318021  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.318113  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.318435  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.318527  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.318548  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.318565  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.318581  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.318605  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.318834  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35473
	I0920 17:36:56.318913  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.319073  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.319167  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.319300  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.319543  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.319811  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.319835  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.319903  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 17:36:56.320366  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.320592  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.321679  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.321733  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.322512  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.322733  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 17:36:56.323531  245557 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 17:36:56.323540  245557 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 17:36:56.324345  245557 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 17:36:56.325100  245557 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 17:36:56.325122  245557 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 17:36:56.325140  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.326093  245557 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:36:56.326112  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 17:36:56.326130  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.326232  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 17:36:56.326618  245557 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:36:56.328288  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 17:36:56.328983  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.329264  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.329496  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.329527  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.329672  245557 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:36:56.329714  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.329725  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.329730  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.329856  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.329944  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.330097  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.330102  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.330283  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.330461  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.330467  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.331167  245557 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 17:36:56.331190  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 17:36:56.331208  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.331686  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 17:36:56.333392  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 17:36:56.334233  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39629
	I0920 17:36:56.334709  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.334787  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.335248  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.335265  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.335335  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.335354  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.335411  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.335560  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.335619  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.335685  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.335796  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.335834  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.336064  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 17:36:56.337255  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.338333  245557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 17:36:56.338361  245557 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 17:36:56.338435  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36195
	I0920 17:36:56.338790  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.339334  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.339351  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.339451  245557 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 17:36:56.339481  245557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 17:36:56.339503  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.339705  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.340127  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:36:56.340219  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:36:56.342208  245557 out.go:177]   - Using image docker.io/busybox:stable
	I0920 17:36:56.342943  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.343386  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.343420  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.343599  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.343794  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.343962  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.344071  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:36:56.344485  245557 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 17:36:56.344506  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 17:36:56.344523  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.347682  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.348119  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.348141  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.348318  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.348498  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.348620  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.348730  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	W0920 17:36:56.350081  245557 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37614->192.168.39.158:22: read: connection reset by peer
	I0920 17:36:56.350113  245557 retry.go:31] will retry after 277.419822ms: ssh: handshake failed: read tcp 192.168.39.1:37614->192.168.39.158:22: read: connection reset by peer
	I0920 17:36:56.358579  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37577
	I0920 17:36:56.359069  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:36:56.359542  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:36:56.359571  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:36:56.359910  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:36:56.360078  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:36:56.361619  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:36:56.361824  245557 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 17:36:56.361842  245557 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 17:36:56.361860  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:36:56.364857  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.365235  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:36:56.365271  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:36:56.365432  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:36:56.365644  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:36:56.365803  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:36:56.365981  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	W0920 17:36:56.368948  245557 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37630->192.168.39.158:22: read: connection reset by peer
	I0920 17:36:56.368974  245557 retry.go:31] will retry after 189.220194ms: ssh: handshake failed: read tcp 192.168.39.1:37630->192.168.39.158:22: read: connection reset by peer
	I0920 17:36:56.558562  245557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:36:56.558883  245557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 17:36:56.674915  245557 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 17:36:56.674949  245557 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 17:36:56.736424  245557 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 17:36:56.736462  245557 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 17:36:56.738918  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 17:36:56.740403  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 17:36:56.779127  245557 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 17:36:56.779166  245557 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 17:36:56.785790  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:36:56.816546  245557 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 17:36:56.816572  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 17:36:56.818607  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 17:36:56.833977  245557 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 17:36:56.834015  245557 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 17:36:56.925219  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 17:36:56.958576  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 17:36:57.000786  245557 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 17:36:57.000810  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 17:36:57.009273  245557 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 17:36:57.009317  245557 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 17:36:57.024743  245557 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 17:36:57.024770  245557 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 17:36:57.043071  245557 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 17:36:57.043099  245557 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 17:36:57.092905  245557 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 17:36:57.092942  245557 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 17:36:57.127201  245557 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 17:36:57.127236  245557 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 17:36:57.158480  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 17:36:57.178499  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 17:36:57.215557  245557 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 17:36:57.215592  245557 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 17:36:57.238838  245557 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 17:36:57.238870  245557 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 17:36:57.247948  245557 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 17:36:57.247973  245557 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 17:36:57.272793  245557 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 17:36:57.272831  245557 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 17:36:57.292572  245557 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 17:36:57.292600  245557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 17:36:57.414471  245557 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 17:36:57.414500  245557 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 17:36:57.441852  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 17:36:57.459384  245557 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 17:36:57.459417  245557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 17:36:57.459605  245557 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 17:36:57.459635  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 17:36:57.487179  245557 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 17:36:57.487211  245557 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 17:36:57.600664  245557 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:36:57.600691  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 17:36:57.665586  245557 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 17:36:57.665618  245557 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 17:36:57.669993  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 17:36:57.692412  245557 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 17:36:57.692454  245557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 17:36:57.777267  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:36:57.878278  245557 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 17:36:57.878309  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 17:36:57.884855  245557 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 17:36:57.884886  245557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 17:36:57.939139  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 17:36:58.166138  245557 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 17:36:58.166167  245557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 17:36:58.648447  245557 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 17:36:58.648486  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 17:36:58.703324  245557 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.144399607s)
	I0920 17:36:58.703358  245557 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.144764949s)
	I0920 17:36:58.703371  245557 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0920 17:36:58.704166  245557 node_ready.go:35] waiting up to 6m0s for node "addons-679190" to be "Ready" ...
	I0920 17:36:58.710465  245557 node_ready.go:49] node "addons-679190" has status "Ready":"True"
	I0920 17:36:58.710493  245557 node_ready.go:38] duration metric: took 6.288327ms for node "addons-679190" to be "Ready" ...
	I0920 17:36:58.710503  245557 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:36:58.723116  245557 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dsxdk" in "kube-system" namespace to be "Ready" ...
	I0920 17:36:59.028902  245557 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 17:36:59.028955  245557 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 17:36:59.192793  245557 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 17:36:59.192824  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 17:36:59.212433  245557 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-679190" context rescaled to 1 replicas
	I0920 17:36:59.496357  245557 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 17:36:59.496454  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 17:36:59.712928  245557 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 17:36:59.712961  245557 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 17:37:00.075273  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 17:37:00.759434  245557 pod_ready.go:103] pod "coredns-7c65d6cfc9-dsxdk" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:02.878793  245557 pod_ready.go:103] pod "coredns-7c65d6cfc9-dsxdk" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:03.327281  245557 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 17:37:03.327326  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:37:03.331115  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:37:03.331744  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:37:03.331780  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:37:03.332019  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:37:03.332249  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:37:03.332520  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:37:03.332731  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:37:03.544424  245557 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 17:37:03.611249  245557 addons.go:234] Setting addon gcp-auth=true in "addons-679190"
	I0920 17:37:03.611311  245557 host.go:66] Checking if "addons-679190" exists ...
	I0920 17:37:03.611651  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:37:03.611695  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:37:03.627843  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35373
	I0920 17:37:03.628403  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:37:03.628939  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:37:03.628963  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:37:03.629370  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:37:03.629868  245557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:37:03.629917  245557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:37:03.647166  245557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38535
	I0920 17:37:03.647674  245557 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:37:03.648222  245557 main.go:141] libmachine: Using API Version  1
	I0920 17:37:03.648244  245557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:37:03.648605  245557 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:37:03.648924  245557 main.go:141] libmachine: (addons-679190) Calling .GetState
	I0920 17:37:03.650642  245557 main.go:141] libmachine: (addons-679190) Calling .DriverName
	I0920 17:37:03.650881  245557 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 17:37:03.650914  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHHostname
	I0920 17:37:03.653472  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:37:03.653934  245557 main.go:141] libmachine: (addons-679190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:27:d9", ip: ""} in network mk-addons-679190: {Iface:virbr1 ExpiryTime:2024-09-20 18:36:29 +0000 UTC Type:0 Mac:52:54:00:40:27:d9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-679190 Clientid:01:52:54:00:40:27:d9}
	I0920 17:37:03.653975  245557 main.go:141] libmachine: (addons-679190) DBG | domain addons-679190 has defined IP address 192.168.39.158 and MAC address 52:54:00:40:27:d9 in network mk-addons-679190
	I0920 17:37:03.654165  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHPort
	I0920 17:37:03.654379  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHKeyPath
	I0920 17:37:03.654559  245557 main.go:141] libmachine: (addons-679190) Calling .GetSSHUsername
	I0920 17:37:03.654756  245557 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/addons-679190/id_rsa Username:docker}
	I0920 17:37:04.200814  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.461859921s)
	I0920 17:37:04.200874  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.200887  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.200907  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.460468965s)
	I0920 17:37:04.200955  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.200972  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201021  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.41520737s)
	I0920 17:37:04.201047  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201055  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201068  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.382435723s)
	I0920 17:37:04.201090  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201101  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201151  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.275909746s)
	I0920 17:37:04.201167  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201168  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.242568548s)
	I0920 17:37:04.201174  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201183  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201191  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201230  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.042717172s)
	I0920 17:37:04.201247  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201255  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201259  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.022724165s)
	I0920 17:37:04.201276  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201286  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201348  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.759469305s)
	I0920 17:37:04.201367  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201375  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201450  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.5314256s)
	I0920 17:37:04.201467  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201476  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201559  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.201567  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.4242534s)
	I0920 17:37:04.201598  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	W0920 17:37:04.201608  245557 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 17:37:04.201637  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.201647  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.201647  245557 retry.go:31] will retry after 372.12607ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
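Editor's note: the apply failure above is an ordering race rather than a broken manifest. The VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml is applied in the same batch as the CRDs that define it, so the first apply is rejected with "ensure CRDs are installed first" and the addon manager schedules a retry (retry.go above); the forced re-apply issued at 17:37:04.574 completes about 2.4s later. A minimal sketch of that bounded retry-with-backoff pattern follows; this is generic illustrative Go, not minikube's retry.go, and runApply is a hypothetical stand-in for the kubectl invocation.

    package main

    import (
        "fmt"
        "time"
    )

    // runApply is a hypothetical stand-in for the kubectl apply call; here it
    // always fails so the backoff path below is exercised.
    func runApply() error { return fmt.Errorf("ensure CRDs are installed first") }

    func main() {
        backoff := 400 * time.Millisecond // the log's first retry waited ~372ms
        var err error
        for attempt := 1; attempt <= 5; attempt++ {
            if err = runApply(); err == nil {
                fmt.Println("apply succeeded on attempt", attempt)
                return
            }
            fmt.Printf("apply failed (attempt %d), retrying in %v: %v\n", attempt, backoff, err)
            time.Sleep(backoff)
            backoff *= 2 // double the wait between attempts
        }
        fmt.Println("giving up after retries:", err)
    }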
	I0920 17:37:04.201655  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201665  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201575  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.201725  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201731  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201734  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.262556719s)
	I0920 17:37:04.201759  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201768  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.201842  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.201915  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.201932  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.201952  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.201961  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.201965  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.201970  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.201977  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.201983  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.202041  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.202065  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.202070  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.202077  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.202082  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.202642  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.202677  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.202684  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.202691  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.202698  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.204814  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.204838  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.204848  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.204856  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.204860  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.204874  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.204885  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.204897  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.204919  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.204946  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.204956  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.204978  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.205011  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.205017  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.205297  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.205350  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.205649  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.205667  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.205737  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.205747  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.205816  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.205822  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.205830  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.205837  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.205890  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.205929  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.205935  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.205943  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.205949  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.207122  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.207136  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.207181  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.207197  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.207230  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.207298  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.207306  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.207316  245557 addons.go:475] Verifying addon metrics-server=true in "addons-679190"
	I0920 17:37:04.207442  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.207454  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.207468  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.207476  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.207476  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.207484  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.207485  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.207492  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.207513  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.207530  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.207619  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.207638  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.207746  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:04.207768  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.207775  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.207784  245557 addons.go:475] Verifying addon registry=true in "addons-679190"
	I0920 17:37:04.209273  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.209289  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.209299  245557 addons.go:475] Verifying addon ingress=true in "addons-679190"
	I0920 17:37:04.210060  245557 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-679190 service yakd-dashboard -n yakd-dashboard
	
	I0920 17:37:04.210094  245557 out.go:177] * Verifying registry addon...
	I0920 17:37:04.211026  245557 out.go:177] * Verifying ingress addon...
	I0920 17:37:04.214177  245557 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 17:37:04.214180  245557 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 17:37:04.219984  245557 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 17:37:04.220012  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:04.232040  245557 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 17:37:04.232063  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
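Editor's note: the kapi.go "waiting for pod ..." lines that repeat from here on are a readiness poll over a label selector, re-checked on an interval until every matched pod reports the Ready condition. A minimal client-go sketch of the same check is below; the selector is the registry one from the log, the one-second tick is chosen for illustration, and the kubeconfig path is the in-guest one quoted in the log (run from outside the node you would point this at your own kubeconfig).

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(p corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path taken from the log
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        selector := "kubernetes.io/minikube-addons=registry"
        for {
            pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                panic(err)
            }
            if len(pods.Items) == 0 {
                fmt.Println("no pods match", selector, "yet")
            }
            allReady := len(pods.Items) > 0
            for _, p := range pods.Items {
                if !podReady(p) {
                    allReady = false
                    fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                }
            }
            if allReady {
                fmt.Println("all pods matching", selector, "are Ready")
                return
            }
            time.Sleep(time.Second)
        }
    }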
	I0920 17:37:04.250786  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.250821  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.251111  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.251130  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	W0920 17:37:04.251227  245557 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
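Editor's note: the storage-provisioner-rancher warning above is a standard optimistic-concurrency conflict: the StorageClass was modified between the addon's read and its write, so the apiserver rejects the stale update and asks the caller to re-read and try again. A minimal client-go sketch of doing exactly that with retry.RetryOnConflict is shown below; it marks local-path as the default class via the standard is-default-class annotation. The kubeconfig path is the in-guest one quoted in the log, and this is an illustrative sketch, not minikube's addon code.

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        // Build a client from the kubeconfig referenced in the log (path assumed
        // to be reachable from wherever this runs).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Re-read and re-apply the annotation until the update no longer hits a
        // resourceVersion conflict, which is the remedy the error message suggests.
        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := client.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
            _, err = client.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
            return err
        })
        if err != nil {
            panic(err)
        }
    }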
	I0920 17:37:04.260835  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:04.260869  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:04.261164  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:04.261183  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:04.574205  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:37:04.725222  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:04.729585  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:05.249467  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:05.249466  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:05.283804  245557 pod_ready.go:103] pod "coredns-7c65d6cfc9-dsxdk" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:05.471390  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.396046848s)
	I0920 17:37:05.471473  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:05.471495  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:05.471416  245557 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.820510198s)
	I0920 17:37:05.471936  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:05.471953  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:05.471964  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:05.471971  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:05.472409  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:05.472432  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:05.472435  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:05.472454  245557 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-679190"
	I0920 17:37:05.473667  245557 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:37:05.474639  245557 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 17:37:05.476343  245557 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 17:37:05.477417  245557 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 17:37:05.477751  245557 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 17:37:05.477771  245557 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 17:37:05.501716  245557 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 17:37:05.501756  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:05.627006  245557 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 17:37:05.627047  245557 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 17:37:05.723361  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:05.729929  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:05.777291  245557 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 17:37:05.777327  245557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 17:37:05.793054  245557 pod_ready.go:93] pod "coredns-7c65d6cfc9-dsxdk" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:05.793083  245557 pod_ready.go:82] duration metric: took 7.069936594s for pod "coredns-7c65d6cfc9-dsxdk" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.793096  245557 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jln6k" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.808093  245557 pod_ready.go:93] pod "coredns-7c65d6cfc9-jln6k" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:05.808122  245557 pod_ready.go:82] duration metric: took 15.016714ms for pod "coredns-7c65d6cfc9-jln6k" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.808135  245557 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.815411  245557 pod_ready.go:93] pod "etcd-addons-679190" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:05.815439  245557 pod_ready.go:82] duration metric: took 7.295923ms for pod "etcd-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.815451  245557 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.826707  245557 pod_ready.go:93] pod "kube-apiserver-addons-679190" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:05.826733  245557 pod_ready.go:82] duration metric: took 11.271544ms for pod "kube-apiserver-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.826746  245557 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.832864  245557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 17:37:05.843767  245557 pod_ready.go:93] pod "kube-controller-manager-addons-679190" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:05.843804  245557 pod_ready.go:82] duration metric: took 17.048824ms for pod "kube-controller-manager-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.843818  245557 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-klvxz" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:05.983081  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:06.137818  245557 pod_ready.go:93] pod "kube-proxy-klvxz" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:06.137858  245557 pod_ready.go:82] duration metric: took 294.032966ms for pod "kube-proxy-klvxz" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:06.137870  245557 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:06.226275  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:06.226546  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:06.672283  245557 pod_ready.go:93] pod "kube-scheduler-addons-679190" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:06.672311  245557 pod_ready.go:82] duration metric: took 534.434193ms for pod "kube-scheduler-addons-679190" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:06.672322  245557 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:06.676924  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:06.723323  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:06.723483  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:06.996072  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.421807501s)
	I0920 17:37:06.996136  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:06.996154  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:06.996393  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:06.996417  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:06.996426  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:06.996434  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:06.996451  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:06.996683  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:06.996693  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:06.996709  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:07.016129  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:07.083780  245557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.250840857s)
	I0920 17:37:07.083874  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:07.083897  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:07.084188  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:07.084212  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:07.084223  245557 main.go:141] libmachine: Making call to close driver server
	I0920 17:37:07.084231  245557 main.go:141] libmachine: (addons-679190) Calling .Close
	I0920 17:37:07.084473  245557 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:37:07.084497  245557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:37:07.084529  245557 main.go:141] libmachine: (addons-679190) DBG | Closing plugin on server side
	I0920 17:37:07.086428  245557 addons.go:475] Verifying addon gcp-auth=true in "addons-679190"
	I0920 17:37:07.089016  245557 out.go:177] * Verifying gcp-auth addon...
	I0920 17:37:07.091518  245557 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 17:37:07.134782  245557 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 17:37:07.134813  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:07.235817  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:07.236611  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:07.488734  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:07.595170  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:07.721622  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:07.723013  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:07.986730  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:08.097723  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:08.235499  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:08.236709  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:08.484933  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:08.595349  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:08.679620  245557 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:08.720021  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:08.720047  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:08.981981  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:09.095366  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:09.219800  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:09.220283  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:09.482502  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:09.596169  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:09.718911  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:09.719167  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:09.981992  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:10.095430  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:10.218531  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:10.218998  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:10.482432  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:10.597590  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:10.965954  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:10.966262  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:11.066406  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:11.095561  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:11.179050  245557 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:11.219287  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:11.219338  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:11.482288  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:11.595743  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:11.718737  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:11.720102  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:11.983121  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:12.096110  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:12.218665  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:12.219012  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:12.481993  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:12.595323  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:12.719728  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:12.719803  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:12.983136  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:13.095240  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:13.219271  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:13.220586  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:13.482367  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:13.594980  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:13.679290  245557 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:13.719607  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:13.719858  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:13.982406  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:14.096116  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:14.218143  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:14.218348  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:14.481878  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:14.595473  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:14.718781  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:14.719599  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:14.983016  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:15.098359  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:15.218446  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:15.219530  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:15.482914  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:15.596240  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:15.680026  245557 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:15.718767  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:15.719239  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:15.982692  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:16.097226  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:16.218654  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:16.219309  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:16.482137  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:16.595706  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:16.719296  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:16.719734  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:16.981704  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:17.095888  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:17.218140  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:17.219734  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:17.481807  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:17.596056  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:17.720484  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:17.720855  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:17.982237  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:18.095526  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:18.179237  245557 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:18.219858  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:18.220498  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:18.482532  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:18.595184  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:18.719021  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:18.719806  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:18.982493  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:19.096098  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:19.218547  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:19.219496  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:19.482320  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:19.595193  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:19.719166  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:19.720319  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:19.982325  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:20.367324  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:20.367356  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:20.367706  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:20.370491  245557 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:20.482782  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:20.595136  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:20.718920  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:20.719192  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:20.981947  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:21.095534  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:21.218869  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:21.219583  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:21.482550  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:21.595874  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:21.719021  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:21.719268  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:21.982430  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:22.095030  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:22.219384  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:22.219891  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:22.482224  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:22.595405  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:22.679943  245557 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"False"
	I0920 17:37:22.718958  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:22.719141  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:22.982733  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:23.096227  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:23.219788  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:23.220067  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:23.483912  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:23.595388  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:23.718637  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:23.719016  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:23.982130  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:24.095662  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:24.218721  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:24.219057  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:24.482535  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:24.595793  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:24.678781  245557 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace has status "Ready":"True"
	I0920 17:37:24.678812  245557 pod_ready.go:82] duration metric: took 18.006481882s for pod "nvidia-device-plugin-daemonset-b5wj9" in "kube-system" namespace to be "Ready" ...
	I0920 17:37:24.678822  245557 pod_ready.go:39] duration metric: took 25.968303705s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:37:24.678872  245557 api_server.go:52] waiting for apiserver process to appear ...
	I0920 17:37:24.678948  245557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:37:24.702218  245557 api_server.go:72] duration metric: took 28.524587153s to wait for apiserver process to appear ...
	I0920 17:37:24.702254  245557 api_server.go:88] waiting for apiserver healthz status ...
	I0920 17:37:24.702293  245557 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0920 17:37:24.706595  245557 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0920 17:37:24.707660  245557 api_server.go:141] control plane version: v1.31.1
	I0920 17:37:24.707685  245557 api_server.go:131] duration metric: took 5.422585ms to wait for apiserver health ...
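Editor's note: the healthz probe above is a plain HTTPS GET against the apiserver; a 200 response with the body "ok" is what api_server.go treats as healthy before it reads the control plane version. A standalone sketch of the same probe follows, using only the Go standard library; the endpoint is the one reported in the log, and skipping certificate verification is an assumption made to keep the example self-contained (the real check authenticates against the cluster CA).

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Probe the apiserver healthz endpoint reported in the log.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption for a self-contained example; production checks verify the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.39.158:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }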
	I0920 17:37:24.707694  245557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 17:37:24.715504  245557 system_pods.go:59] 17 kube-system pods found
	I0920 17:37:24.715541  245557 system_pods.go:61] "coredns-7c65d6cfc9-dsxdk" [3371b6ad-8f6e-4474-a677-f07c0b4e0a38] Running
	I0920 17:37:24.715552  245557 system_pods.go:61] "csi-hostpath-attacher-0" [1630eb87-6fea-4510-8b0d-cb108c179963] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 17:37:24.715563  245557 system_pods.go:61] "csi-hostpath-resizer-0" [9b98474b-c72c-4230-973c-a76ed4f731c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 17:37:24.715573  245557 system_pods.go:61] "csi-hostpathplugin-9m9gc" [00f39caf-3478-4abb-922e-28239885d7bf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 17:37:24.715580  245557 system_pods.go:61] "etcd-addons-679190" [4d3ed97e-c5a9-4017-86dd-68689e55e1f0] Running
	I0920 17:37:24.715586  245557 system_pods.go:61] "kube-apiserver-addons-679190" [24fac84a-44d2-4e96-8680-606874e6b5bb] Running
	I0920 17:37:24.715591  245557 system_pods.go:61] "kube-controller-manager-addons-679190" [5dec7f61-8787-4c49-8f6f-998e2dbc01cb] Running
	I0920 17:37:24.715597  245557 system_pods.go:61] "kube-ingress-dns-minikube" [1a3b7852-a919-4f95-9e5c-20ead0de76ad] Running
	I0920 17:37:24.715603  245557 system_pods.go:61] "kube-proxy-klvxz" [6edcd5de-35eb-4e5b-8073-e2a49428b300] Running
	I0920 17:37:24.715609  245557 system_pods.go:61] "kube-scheduler-addons-679190" [143ec669-777c-495c-80f3-a792643b75e8] Running
	I0920 17:37:24.715619  245557 system_pods.go:61] "metrics-server-84c5f94fbc-fj4mf" [adb63308-9d43-444e-b31b-a5efeef5d323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 17:37:24.715625  245557 system_pods.go:61] "nvidia-device-plugin-daemonset-b5wj9" [eb9faaf1-05e4-4f88-abbb-479f222d2664] Running
	I0920 17:37:24.715637  245557 system_pods.go:61] "registry-66c9cd494c-7g6lm" [4ad8ab0b-f43b-475a-984c-11d2a23963c0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 17:37:24.715646  245557 system_pods.go:61] "registry-proxy-k96rm" [0612b678-15da-44d6-acfb-c29dd8dd2b7d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 17:37:24.715678  245557 system_pods.go:61] "snapshot-controller-56fcc65765-5qmkt" [a4b880c1-bd61-45bb-80cf-ae1dd6af7d4e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:37:24.715689  245557 system_pods.go:61] "snapshot-controller-56fcc65765-cwbl2" [6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:37:24.715696  245557 system_pods.go:61] "storage-provisioner" [339440d6-4355-4e26-a436-2edefb4d7b9d] Running
	I0920 17:37:24.715707  245557 system_pods.go:74] duration metric: took 8.003633ms to wait for pod list to return data ...
	I0920 17:37:24.715719  245557 default_sa.go:34] waiting for default service account to be created ...
	I0920 17:37:24.719187  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:24.719879  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:24.721134  245557 default_sa.go:45] found service account: "default"
	I0920 17:37:24.721158  245557 default_sa.go:55] duration metric: took 5.428135ms for default service account to be created ...
	I0920 17:37:24.721168  245557 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 17:37:24.730946  245557 system_pods.go:86] 17 kube-system pods found
	I0920 17:37:24.730977  245557 system_pods.go:89] "coredns-7c65d6cfc9-dsxdk" [3371b6ad-8f6e-4474-a677-f07c0b4e0a38] Running
	I0920 17:37:24.730988  245557 system_pods.go:89] "csi-hostpath-attacher-0" [1630eb87-6fea-4510-8b0d-cb108c179963] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 17:37:24.730995  245557 system_pods.go:89] "csi-hostpath-resizer-0" [9b98474b-c72c-4230-973c-a76ed4f731c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 17:37:24.731005  245557 system_pods.go:89] "csi-hostpathplugin-9m9gc" [00f39caf-3478-4abb-922e-28239885d7bf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 17:37:24.731009  245557 system_pods.go:89] "etcd-addons-679190" [4d3ed97e-c5a9-4017-86dd-68689e55e1f0] Running
	I0920 17:37:24.731014  245557 system_pods.go:89] "kube-apiserver-addons-679190" [24fac84a-44d2-4e96-8680-606874e6b5bb] Running
	I0920 17:37:24.731017  245557 system_pods.go:89] "kube-controller-manager-addons-679190" [5dec7f61-8787-4c49-8f6f-998e2dbc01cb] Running
	I0920 17:37:24.731021  245557 system_pods.go:89] "kube-ingress-dns-minikube" [1a3b7852-a919-4f95-9e5c-20ead0de76ad] Running
	I0920 17:37:24.731024  245557 system_pods.go:89] "kube-proxy-klvxz" [6edcd5de-35eb-4e5b-8073-e2a49428b300] Running
	I0920 17:37:24.731027  245557 system_pods.go:89] "kube-scheduler-addons-679190" [143ec669-777c-495c-80f3-a792643b75e8] Running
	I0920 17:37:24.731031  245557 system_pods.go:89] "metrics-server-84c5f94fbc-fj4mf" [adb63308-9d43-444e-b31b-a5efeef5d323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 17:37:24.731036  245557 system_pods.go:89] "nvidia-device-plugin-daemonset-b5wj9" [eb9faaf1-05e4-4f88-abbb-479f222d2664] Running
	I0920 17:37:24.731041  245557 system_pods.go:89] "registry-66c9cd494c-7g6lm" [4ad8ab0b-f43b-475a-984c-11d2a23963c0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 17:37:24.731047  245557 system_pods.go:89] "registry-proxy-k96rm" [0612b678-15da-44d6-acfb-c29dd8dd2b7d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 17:37:24.731053  245557 system_pods.go:89] "snapshot-controller-56fcc65765-5qmkt" [a4b880c1-bd61-45bb-80cf-ae1dd6af7d4e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:37:24.731061  245557 system_pods.go:89] "snapshot-controller-56fcc65765-cwbl2" [6d673e8e-a45e-4e11-a0cb-08d2f7a89a7f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:37:24.731065  245557 system_pods.go:89] "storage-provisioner" [339440d6-4355-4e26-a436-2edefb4d7b9d] Running
	I0920 17:37:24.731073  245557 system_pods.go:126] duration metric: took 9.894741ms to wait for k8s-apps to be running ...
	I0920 17:37:24.731083  245557 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 17:37:24.731128  245557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:37:24.756246  245557 system_svc.go:56] duration metric: took 25.149435ms WaitForService to wait for kubelet
	I0920 17:37:24.756281  245557 kubeadm.go:582] duration metric: took 28.578660436s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:37:24.756309  245557 node_conditions.go:102] verifying NodePressure condition ...
	I0920 17:37:24.759977  245557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 17:37:24.760008  245557 node_conditions.go:123] node cpu capacity is 2
	I0920 17:37:24.760024  245557 node_conditions.go:105] duration metric: took 3.709037ms to run NodePressure ...
	I0920 17:37:24.760039  245557 start.go:241] waiting for startup goroutines ...
	I0920 17:37:24.982102  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:25.692769  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:25.692898  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:25.693075  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:25.695274  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:25.792088  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:25.792245  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:25.792632  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:25.985189  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:26.096256  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:26.218968  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:26.219398  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:26.483148  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:26.596907  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:26.720942  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:26.723460  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:26.984520  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:27.096894  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:27.220406  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:27.220769  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:27.484078  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:27.595695  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:27.720606  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:27.721639  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:27.982903  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:28.095938  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:28.219987  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:28.220971  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:28.481389  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:28.610426  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:28.719978  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:28.720142  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:28.983078  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:29.095989  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:29.218709  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:29.218930  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:29.482101  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:29.595615  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:29.719088  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:29.719199  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:29.982791  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:30.095599  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:30.217922  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:30.218986  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:30.482436  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:30.595942  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:30.718190  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:30.719931  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:30.981672  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:31.095251  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:31.219963  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:31.221121  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:31.482257  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:31.595251  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:31.720133  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:31.720333  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:31.982198  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:32.096412  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:32.219862  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:32.219953  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:32.482618  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:32.594901  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:32.719007  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:32.719260  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:32.982391  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:33.401207  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:33.401585  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:33.401765  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:33.483109  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:33.596749  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:33.720837  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:33.721022  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:33.982168  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:34.096308  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:34.218951  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:34.219384  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:34.482769  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:34.598370  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:34.720347  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:34.720439  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:34.982322  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:35.095917  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:35.219540  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:35.219941  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:35.487140  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:35.594855  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:35.718904  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:35.720766  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:35.982409  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:36.095826  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:36.219811  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:36.220538  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:36.482003  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:36.594821  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:36.719933  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:36.720068  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:36.981755  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:37.095191  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:37.219188  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:37.219358  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:37.659063  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:37.661446  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:37.720179  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:37.721456  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:37.982458  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:38.094851  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:38.218310  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:38.220085  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:38.483023  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:38.594895  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:38.722426  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:38.725839  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:38.982505  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:39.098447  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:39.218837  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:39.218838  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:39.481811  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:39.595792  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:39.718814  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:39.719320  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:39.982909  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:40.095985  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:40.218489  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:40.219278  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:40.481967  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:40.595737  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:40.718910  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:40.719238  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:40.983283  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:41.095940  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:41.219565  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:41.221013  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:41.482435  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:41.595334  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:41.720647  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:41.720684  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:41.983153  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:42.094768  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:42.220086  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:42.220394  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:42.482518  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:42.595381  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:42.720206  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:42.720419  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:42.983132  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:43.095176  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:43.219302  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:43.219642  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:43.482642  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:43.595409  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:43.721373  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:43.721632  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:43.982605  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:44.098347  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:44.219089  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:44.221006  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:44.482252  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:44.595145  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:44.719105  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:44.719297  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:44.982659  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:45.095138  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:45.219384  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:45.220254  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:45.482221  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:45.595501  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:45.718470  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:45.719300  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:45.982785  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:46.095155  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:46.219224  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:46.219462  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:46.483020  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:46.595174  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:46.719798  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:46.720503  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:46.983110  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:47.095113  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:47.219161  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:47.219467  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:47.482526  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:47.596127  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:47.719708  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:47.722386  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:47.983136  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:48.095553  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:48.219791  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:48.220277  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:48.482138  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:48.595891  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:48.719594  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:48.719903  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:48.983736  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:49.095768  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:49.218264  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:49.218364  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:49.482328  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:49.594924  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:49.720709  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:49.721038  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:49.984147  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:50.095908  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:50.218641  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:50.219557  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:50.482216  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:50.595636  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:50.718073  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:50.718453  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:50.982477  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:51.096470  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:51.218737  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:51.219071  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:51.482552  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:51.594846  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:51.719591  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:51.719929  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:51.982403  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:52.094835  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:52.219094  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:52.219308  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:52.505584  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:52.595700  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:52.722416  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:52.722934  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:52.982125  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:53.095783  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:53.219934  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:53.220724  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:53.481962  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:53.595411  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:53.719033  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:53.719514  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:53.983809  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:54.104151  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:54.218336  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:54.220411  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:54.483652  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:54.596251  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:54.719528  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:54.720220  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:55.232368  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:55.232724  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:55.233181  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:55.233391  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:55.481929  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:55.595861  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:55.718543  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:55.718985  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:55.983911  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:56.095903  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:56.220860  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:56.221898  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:56.482778  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:56.595842  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:56.718847  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:56.719100  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:56.982103  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:57.095564  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:57.218351  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:57.218561  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:57.482555  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:57.595003  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:57.719076  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:57.719278  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:57.983394  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:58.095258  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:58.218602  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:58.219149  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:58.482754  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:58.595355  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:58.719161  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:58.719321  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:58.981879  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:59.095291  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:59.219381  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:59.220178  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:59.482616  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:37:59.596295  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:37:59.719294  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:37:59.719426  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:37:59.993620  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:00.096081  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:00.219485  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:38:00.219841  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:00.482663  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:00.595396  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:00.720126  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:38:00.720694  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:00.992204  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:01.097761  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:01.219498  245557 kapi.go:107] duration metric: took 57.005316247s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 17:38:01.220075  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:01.484002  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:01.595128  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:01.719104  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:02.136461  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:02.137748  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:02.239284  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:02.484059  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:02.597924  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:02.718800  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:02.982988  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:03.095774  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:03.226940  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:03.482947  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:03.595687  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:03.718635  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:03.982370  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:04.102128  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:04.219301  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:04.483127  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:04.595106  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:04.719141  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:04.981631  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:05.101287  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:05.219055  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:05.482258  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:05.595242  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:05.718751  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:05.982406  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:06.106689  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:06.221343  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:06.482771  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:06.594811  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:06.719092  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:06.981985  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:07.097061  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:07.219541  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:07.483408  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:07.595174  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:07.719181  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:07.982038  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:08.095412  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:08.220499  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:08.484258  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:08.595950  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:08.718848  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:08.983659  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:09.095507  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:09.223029  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:09.486835  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:09.599413  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:09.719800  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:09.982147  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:10.095511  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:10.669481  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:10.669715  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:10.669778  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:10.753003  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:10.984155  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:11.096392  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:11.226061  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:11.482229  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:11.595481  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:11.719332  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:11.982116  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:12.095541  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:12.221657  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:12.482601  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:12.595013  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:12.731224  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:12.982914  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:13.095203  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:13.220342  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:13.483110  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:13.598709  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:13.718995  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:13.983441  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:14.094805  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:14.225305  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:14.482669  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:14.596239  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:14.720831  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:14.982905  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:15.095677  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:15.218902  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:15.482688  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:15.595271  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:15.752797  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:15.982814  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:16.095989  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:16.218789  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:16.482153  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:16.595532  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:16.718428  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:16.982631  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:17.095161  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:17.218895  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:17.482903  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:17.595571  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:17.720206  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:17.981706  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:18.095526  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:18.221214  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:18.483135  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:18.595497  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:18.723739  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:18.983544  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:19.096959  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:19.218551  245557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:38:19.482638  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:19.595067  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:19.718890  245557 kapi.go:107] duration metric: took 1m15.504703683s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 17:38:19.986184  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:20.098494  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:20.482419  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:20.595350  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:20.984285  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:21.095405  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:21.482801  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:21.595705  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:21.982482  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:22.095811  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:22.482263  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:22.595985  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:22.983166  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:23.095955  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:23.482802  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:23.607423  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:23.983139  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:24.095964  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:24.482710  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:24.595867  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:24.982831  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:25.095296  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:38:25.485376  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:25.602264  245557 kapi.go:107] duration metric: took 1m18.510746029s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 17:38:25.604077  245557 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-679190 cluster.
	I0920 17:38:25.605455  245557 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 17:38:25.607126  245557 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 17:38:25.983952  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:26.489199  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:26.984440  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:27.486001  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:27.982356  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:28.481673  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:28.985677  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:29.483232  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:29.981588  245557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:38:30.486914  245557 kapi.go:107] duration metric: took 1m25.009495563s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 17:38:30.489426  245557 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, cloud-spanner, inspektor-gadget, nvidia-device-plugin, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0920 17:38:30.491035  245557 addons.go:510] duration metric: took 1m34.313356496s for enable addons: enabled=[storage-provisioner ingress-dns cloud-spanner inspektor-gadget nvidia-device-plugin metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0920 17:38:30.491107  245557 start.go:246] waiting for cluster config update ...
	I0920 17:38:30.491135  245557 start.go:255] writing updated cluster config ...
	I0920 17:38:30.491469  245557 ssh_runner.go:195] Run: rm -f paused
	I0920 17:38:30.547139  245557 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 17:38:30.549058  245557 out.go:177] * Done! kubectl is now configured to use "addons-679190" cluster and "default" namespace by default
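	A note on the gcp-auth hints printed above: the `gcp-auth-skip-secret` label only takes effect if it is present when the pod is created, which is why the log says existing pods must be recreated or the addon rerun with --refresh. A minimal, hypothetical illustration of starting a pod that opts out of credential mounting (the pod name and sleep duration are made up for this sketch; the label key is the one named in the message above):

	  kubectl --context addons-679190 run skip-gcp-auth-demo --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox \
	    --labels=gcp-auth-skip-secret=true -- sleep 300

	Because the label is applied at creation time, the gcp-auth webhook should leave this pod without the mounted credentials, assuming the addon behaves as the messages above describe.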
	
	
	==> CRI-O <==
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.802177634Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854756802153519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e8f1126f-1f99-496e-aecc-7059378db90a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.802859713Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21eacacb-7311-4d12-bfda-01e8fd7907e4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.802958957Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21eacacb-7311-4d12-bfda-01e8fd7907e4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.803407736Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a7564befb2262f7ccc92bee22dd03bfb963dd2b593f58cd61501d7aaa6eeb97,PodSandboxId:f9cf5175fff467a5d0e91ee68f7d277bf862a45e11cb39fffb6ee3614ead9923,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726854581495856579,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-xfq9d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e1ea699-e231-467a-a0d1-75143d1036b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19b33ea2ca1e95f3cf6352959a33b4097f8b4afb2a99285d44c77b250f277153,PodSandboxId:baa7b5cea9fa13bed223540120ceec73698806159aa49c33bd266d18b3ec5d0b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726854442139284452,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 719ce5c1-7853-4fc9-8fd3-7725aba7ed0c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0b4062645fc2c1c8c82cdce410360489fe6600fb411ea6d60c712a5c12813f,PodSandboxId:5339d36289fab846d220fa9edfa1af3bbc0ffda6cd68845caeceb1aa176d74b5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726853904701573245,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-58447,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 6925f58e-54c8-43f8-893e-4ff8a6a84707,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f37f4284c136c5a93b735b37ed0979ddbd084e8586efb69fd840601eea6e9b2d,PodSandboxId:0e75080753c376351238a32172857b9850af59e0803ed5f26aa7a13beb06c7fe,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726853848562042456,Labels:map[string]string{io.kubernetes.container.name: l
ocal-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-vcxc2,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 7fc94b7c-a858-4af7-9355-2a81abf00a96,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33bb418967f323981293619c300cd658dfb1efb2617606f6872ec9443138d8d4,PodSandboxId:d86e657235d1aee688b3d4777827dc899fbf0d085c23dbc7f847861897fb0987,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726853845
803445826,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-fj4mf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb63308-9d43-444e-b31b-a5efeef5d323,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c860a4c507c477109d4747b1b074c68a4a31b9aeee2cfc9591edda6f92a49c41,PodSandboxId:5c51ff7a22efee9bfed73a4683dfae61105461d71e774cd60d35b169d58701f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db300
2f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726853822536970769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339440d6-4355-4e26-a436-2edefb4d7b9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56bab5bc8aac1da4daf358eadce72c458a49fba19d4c18106120004ede4b716,PodSandboxId:6cb2d547f55f043965d896f833e8e278cd9ea490c81edd68102d7c2c5eb333bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726853820362351318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dsxdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371b6ad-8f6e-4474-a677-f07c0b4e0a38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92cf3212ca3856ac30692de35be4bf7391dbf53d3b71d366bbd05e33353b54b5,PodSandboxId:28727162eae183907c195d9fd5223acf57032a1040947cea5dfa9b15cfe6dd47,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726853817570982707,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klvxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6edcd5de-35eb-4e5b-8073-e2a49428b300,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f013b45bfd968d0cf23514647a630ce699e0ed7b9a36138115cce03563ebd0ef,PodSandboxId:fa1a7012d3ccb9a78a9cc3d8d35ee4a4aa883415cdd8fe1eda6bd57d5483df19,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d
0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726853806327216995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a8f619038c0e1f5f5e421f1961f8a4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48780df679f85764217ee650d3268dfc7988e43dd065577ee1d4a41b3b94f2c,PodSandboxId:fd2c93f89c728325fe986e081ce5e22caf7056060693eac9660634d886e81823,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a1
3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726853806316679007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc8d6b591a917dcaa84c49b09e7c78a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b6b3339abef5b932a0a51290bacbf5ea276d9c2f651978f0fb128032a963ff0,PodSandboxId:fd6f30369eff5dcb42ee83f7ad74d1d0fa801c373f8920a44d3e06edba2e06d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726853806319373192,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97255a7c0e075db6f5e083c1ea277628,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad81a4e34c0219e34fccc748356a548ec1f96c3939761411d1393005f3368bd,PodSandboxId:7b642a7bdd5a2c7525004ed61914d2fc58ca1f4c403f001bb0d898ef73e618c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726853806168616121,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548b4fadf5e5a756eea840e162d03eb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21eacacb-7311-4d12-bfda-01e8fd7907e4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.837447317Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7daf4035-cbb4-4afd-be2f-94953dad0aeb name=/runtime.v1.RuntimeService/Version
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.837537614Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7daf4035-cbb4-4afd-be2f-94953dad0aeb name=/runtime.v1.RuntimeService/Version
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.838799182Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5bf89b3-1184-4869-9ad6-120d08ea46b2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.840217389Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854756840189536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5bf89b3-1184-4869-9ad6-120d08ea46b2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.841079484Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d17f6af-96f5-443c-a623-48db8f7c589a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.841176607Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d17f6af-96f5-443c-a623-48db8f7c589a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.841507548Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a7564befb2262f7ccc92bee22dd03bfb963dd2b593f58cd61501d7aaa6eeb97,PodSandboxId:f9cf5175fff467a5d0e91ee68f7d277bf862a45e11cb39fffb6ee3614ead9923,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726854581495856579,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-xfq9d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e1ea699-e231-467a-a0d1-75143d1036b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19b33ea2ca1e95f3cf6352959a33b4097f8b4afb2a99285d44c77b250f277153,PodSandboxId:baa7b5cea9fa13bed223540120ceec73698806159aa49c33bd266d18b3ec5d0b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726854442139284452,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 719ce5c1-7853-4fc9-8fd3-7725aba7ed0c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0b4062645fc2c1c8c82cdce410360489fe6600fb411ea6d60c712a5c12813f,PodSandboxId:5339d36289fab846d220fa9edfa1af3bbc0ffda6cd68845caeceb1aa176d74b5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726853904701573245,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-58447,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 6925f58e-54c8-43f8-893e-4ff8a6a84707,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f37f4284c136c5a93b735b37ed0979ddbd084e8586efb69fd840601eea6e9b2d,PodSandboxId:0e75080753c376351238a32172857b9850af59e0803ed5f26aa7a13beb06c7fe,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726853848562042456,Labels:map[string]string{io.kubernetes.container.name: l
ocal-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-vcxc2,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 7fc94b7c-a858-4af7-9355-2a81abf00a96,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33bb418967f323981293619c300cd658dfb1efb2617606f6872ec9443138d8d4,PodSandboxId:d86e657235d1aee688b3d4777827dc899fbf0d085c23dbc7f847861897fb0987,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726853845
803445826,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-fj4mf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb63308-9d43-444e-b31b-a5efeef5d323,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c860a4c507c477109d4747b1b074c68a4a31b9aeee2cfc9591edda6f92a49c41,PodSandboxId:5c51ff7a22efee9bfed73a4683dfae61105461d71e774cd60d35b169d58701f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db300
2f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726853822536970769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339440d6-4355-4e26-a436-2edefb4d7b9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56bab5bc8aac1da4daf358eadce72c458a49fba19d4c18106120004ede4b716,PodSandboxId:6cb2d547f55f043965d896f833e8e278cd9ea490c81edd68102d7c2c5eb333bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726853820362351318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dsxdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371b6ad-8f6e-4474-a677-f07c0b4e0a38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92cf3212ca3856ac30692de35be4bf7391dbf53d3b71d366bbd05e33353b54b5,PodSandboxId:28727162eae183907c195d9fd5223acf57032a1040947cea5dfa9b15cfe6dd47,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726853817570982707,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klvxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6edcd5de-35eb-4e5b-8073-e2a49428b300,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f013b45bfd968d0cf23514647a630ce699e0ed7b9a36138115cce03563ebd0ef,PodSandboxId:fa1a7012d3ccb9a78a9cc3d8d35ee4a4aa883415cdd8fe1eda6bd57d5483df19,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d
0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726853806327216995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a8f619038c0e1f5f5e421f1961f8a4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48780df679f85764217ee650d3268dfc7988e43dd065577ee1d4a41b3b94f2c,PodSandboxId:fd2c93f89c728325fe986e081ce5e22caf7056060693eac9660634d886e81823,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a1
3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726853806316679007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc8d6b591a917dcaa84c49b09e7c78a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b6b3339abef5b932a0a51290bacbf5ea276d9c2f651978f0fb128032a963ff0,PodSandboxId:fd6f30369eff5dcb42ee83f7ad74d1d0fa801c373f8920a44d3e06edba2e06d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726853806319373192,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97255a7c0e075db6f5e083c1ea277628,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad81a4e34c0219e34fccc748356a548ec1f96c3939761411d1393005f3368bd,PodSandboxId:7b642a7bdd5a2c7525004ed61914d2fc58ca1f4c403f001bb0d898ef73e618c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726853806168616121,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548b4fadf5e5a756eea840e162d03eb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d17f6af-96f5-443c-a623-48db8f7c589a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.887775595Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c0c6b829-c5dd-4ae3-b1db-c6e65637446d name=/runtime.v1.RuntimeService/Version
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.887853825Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c0c6b829-c5dd-4ae3-b1db-c6e65637446d name=/runtime.v1.RuntimeService/Version
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.889411856Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3dc4d66e-f54a-42d6-ad8a-9e2324a5a5b2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.890626720Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854756890597434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3dc4d66e-f54a-42d6-ad8a-9e2324a5a5b2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.891245335Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54cda28e-0514-4a6d-8d48-0558ab258408 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.891299801Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54cda28e-0514-4a6d-8d48-0558ab258408 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.891574471Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a7564befb2262f7ccc92bee22dd03bfb963dd2b593f58cd61501d7aaa6eeb97,PodSandboxId:f9cf5175fff467a5d0e91ee68f7d277bf862a45e11cb39fffb6ee3614ead9923,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726854581495856579,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-xfq9d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e1ea699-e231-467a-a0d1-75143d1036b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19b33ea2ca1e95f3cf6352959a33b4097f8b4afb2a99285d44c77b250f277153,PodSandboxId:baa7b5cea9fa13bed223540120ceec73698806159aa49c33bd266d18b3ec5d0b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726854442139284452,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 719ce5c1-7853-4fc9-8fd3-7725aba7ed0c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0b4062645fc2c1c8c82cdce410360489fe6600fb411ea6d60c712a5c12813f,PodSandboxId:5339d36289fab846d220fa9edfa1af3bbc0ffda6cd68845caeceb1aa176d74b5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726853904701573245,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-58447,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 6925f58e-54c8-43f8-893e-4ff8a6a84707,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f37f4284c136c5a93b735b37ed0979ddbd084e8586efb69fd840601eea6e9b2d,PodSandboxId:0e75080753c376351238a32172857b9850af59e0803ed5f26aa7a13beb06c7fe,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726853848562042456,Labels:map[string]string{io.kubernetes.container.name: l
ocal-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-vcxc2,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 7fc94b7c-a858-4af7-9355-2a81abf00a96,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33bb418967f323981293619c300cd658dfb1efb2617606f6872ec9443138d8d4,PodSandboxId:d86e657235d1aee688b3d4777827dc899fbf0d085c23dbc7f847861897fb0987,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726853845
803445826,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-fj4mf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb63308-9d43-444e-b31b-a5efeef5d323,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c860a4c507c477109d4747b1b074c68a4a31b9aeee2cfc9591edda6f92a49c41,PodSandboxId:5c51ff7a22efee9bfed73a4683dfae61105461d71e774cd60d35b169d58701f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db300
2f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726853822536970769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339440d6-4355-4e26-a436-2edefb4d7b9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56bab5bc8aac1da4daf358eadce72c458a49fba19d4c18106120004ede4b716,PodSandboxId:6cb2d547f55f043965d896f833e8e278cd9ea490c81edd68102d7c2c5eb333bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726853820362351318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dsxdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3371b6ad-8f6e-4474-a677-f07c0b4e0a38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92cf3212ca3856ac30692de35be4bf7391dbf53d3b71d366bbd05e33353b54b5,PodSandboxId:28727162eae183907c195d9fd5223acf57032a1040947cea5dfa9b15cfe6dd47,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726853817570982707,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klvxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6edcd5de-35eb-4e5b-8073-e2a49428b300,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f013b45bfd968d0cf23514647a630ce699e0ed7b9a36138115cce03563ebd0ef,PodSandboxId:fa1a7012d3ccb9a78a9cc3d8d35ee4a4aa883415cdd8fe1eda6bd57d5483df19,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d
0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726853806327216995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53a8f619038c0e1f5f5e421f1961f8a4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48780df679f85764217ee650d3268dfc7988e43dd065577ee1d4a41b3b94f2c,PodSandboxId:fd2c93f89c728325fe986e081ce5e22caf7056060693eac9660634d886e81823,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a1
3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726853806316679007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc8d6b591a917dcaa84c49b09e7c78a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b6b3339abef5b932a0a51290bacbf5ea276d9c2f651978f0fb128032a963ff0,PodSandboxId:fd6f30369eff5dcb42ee83f7ad74d1d0fa801c373f8920a44d3e06edba2e06d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726853806319373192,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97255a7c0e075db6f5e083c1ea277628,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad81a4e34c0219e34fccc748356a548ec1f96c3939761411d1393005f3368bd,PodSandboxId:7b642a7bdd5a2c7525004ed61914d2fc58ca1f4c403f001bb0d898ef73e618c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726853806168616121,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-679190,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548b4fadf5e5a756eea840e162d03eb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54cda28e-0514-4a6d-8d48-0558ab258408 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.904144985Z" level=debug msg="Event: WRITE         \"/var/run/crio/exits/33bb418967f323981293619c300cd658dfb1efb2617606f6872ec9443138d8d4.W16VT2\"" file="server/server.go:805"
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.904202867Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/33bb418967f323981293619c300cd658dfb1efb2617606f6872ec9443138d8d4.W16VT2\"" file="server/server.go:805"
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.904230435Z" level=debug msg="Container or sandbox exited: 33bb418967f323981293619c300cd658dfb1efb2617606f6872ec9443138d8d4.W16VT2" file="server/server.go:810"
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.904261658Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/33bb418967f323981293619c300cd658dfb1efb2617606f6872ec9443138d8d4\"" file="server/server.go:805"
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.904279572Z" level=debug msg="Container or sandbox exited: 33bb418967f323981293619c300cd658dfb1efb2617606f6872ec9443138d8d4" file="server/server.go:810"
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.904300440Z" level=debug msg="container exited and found: 33bb418967f323981293619c300cd658dfb1efb2617606f6872ec9443138d8d4" file="server/server.go:825"
	Sep 20 17:52:36 addons-679190 crio[661]: time="2024-09-20 17:52:36.904380509Z" level=debug msg="Event: RENAME        \"/var/run/crio/exits/33bb418967f323981293619c300cd658dfb1efb2617606f6872ec9443138d8d4.W16VT2\"" file="server/server.go:805"
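	The CRI-O debug entries above are the runtime's side of the kubelet's periodic CRI polling (Version, ImageFsInfo, and unfiltered ListContainers requests); the final WRITE/CREATE/RENAME events record an exit file for container 33bb418967f32..., which the "container status" table below shows as the Exited metrics-server container. As a rough sketch, equivalent queries can be issued by hand on the node with standard crictl subcommands (output formatting will differ from the gRPC dump above):

	  sudo crictl version
	  sudo crictl imagefsinfo
	  sudo crictl ps -a

	The last command corresponds to the unfiltered ListContainers call and should list the same containers that appear in the section below.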
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0a7564befb226       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   f9cf5175fff46       hello-world-app-55bf9c44b4-xfq9d
	19b33ea2ca1e9       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         5 minutes ago       Running             nginx                     0                   baa7b5cea9fa1       nginx
	ec0b4062645fc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            14 minutes ago      Running             gcp-auth                  0                   5339d36289fab       gcp-auth-89d5ffd79-58447
	f37f4284c136c       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        15 minutes ago      Running             local-path-provisioner    0                   0e75080753c37       local-path-provisioner-86d989889c-vcxc2
	33bb418967f32       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   15 minutes ago      Exited              metrics-server            0                   d86e657235d1a       metrics-server-84c5f94fbc-fj4mf
	c860a4c507c47       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        15 minutes ago      Running             storage-provisioner       0                   5c51ff7a22efe       storage-provisioner
	e56bab5bc8aac       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        15 minutes ago      Running             coredns                   0                   6cb2d547f55f0       coredns-7c65d6cfc9-dsxdk
	92cf3212ca385       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        15 minutes ago      Running             kube-proxy                0                   28727162eae18       kube-proxy-klvxz
	f013b45bfd968       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago      Running             etcd                      0                   fa1a7012d3ccb       etcd-addons-679190
	6b6b3339abef5       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        15 minutes ago      Running             kube-controller-manager   0                   fd6f30369eff5       kube-controller-manager-addons-679190
	b48780df679f8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        15 minutes ago      Running             kube-scheduler            0                   fd2c93f89c728       kube-scheduler-addons-679190
	1ad81a4e34c02       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        15 minutes ago      Running             kube-apiserver            0                   7b642a7bdd5a2       kube-apiserver-addons-679190
	
	
	==> coredns [e56bab5bc8aac1da4daf358eadce72c458a49fba19d4c18106120004ede4b716] <==
	[INFO] 10.244.0.6:43873 - 60318 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000102028s
	[INFO] 10.244.0.6:49634 - 1642 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000165365s
	[INFO] 10.244.0.6:49634 - 14697 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000162336s
	[INFO] 10.244.0.6:51034 - 55160 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094365s
	[INFO] 10.244.0.6:51034 - 10106 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000065072s
	[INFO] 10.244.0.6:60315 - 40487 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043683s
	[INFO] 10.244.0.6:60315 - 43321 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084501s
	[INFO] 10.244.0.6:35891 - 53873 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000042787s
	[INFO] 10.244.0.6:35891 - 34419 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000033205s
	[INFO] 10.244.0.6:33447 - 25628 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000150214s
	[INFO] 10.244.0.6:33447 - 35042 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000039143s
	[INFO] 10.244.0.6:40512 - 59984 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000083424s
	[INFO] 10.244.0.6:40512 - 63570 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038001s
	[INFO] 10.244.0.6:45833 - 63289 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052986s
	[INFO] 10.244.0.6:45833 - 1087 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027214s
	[INFO] 10.244.0.6:44945 - 60461 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000039073s
	[INFO] 10.244.0.6:44945 - 33323 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000036767s
	[INFO] 10.244.0.21:44980 - 55236 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000537238s
	[INFO] 10.244.0.21:57739 - 29484 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00030418s
	[INFO] 10.244.0.21:36936 - 49312 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000192738s
	[INFO] 10.244.0.21:55426 - 11322 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000253353s
	[INFO] 10.244.0.21:57850 - 37730 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000183379s
	[INFO] 10.244.0.21:53881 - 17609 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00016102s
	[INFO] 10.244.0.21:43516 - 53661 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001411255s
	[INFO] 10.244.0.21:49825 - 60779 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000963325s
	
	
	==> describe nodes <==
	Name:               addons-679190
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-679190
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=addons-679190
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T17_36_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-679190
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:36:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-679190
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:52:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:49:56 +0000   Fri, 20 Sep 2024 17:36:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:49:56 +0000   Fri, 20 Sep 2024 17:36:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:49:56 +0000   Fri, 20 Sep 2024 17:36:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:49:56 +0000   Fri, 20 Sep 2024 17:36:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.158
	  Hostname:    addons-679190
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 be44d0789ac247d0942761612c630a1f
	  System UUID:                be44d078-9ac2-47d0-9427-61612c630a1f
	  Boot ID:                    b2360fab-23fa-467c-99ca-2729b31c70c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-xfq9d           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  gcp-auth                    gcp-auth-89d5ffd79-58447                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-7c65d6cfc9-dsxdk                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-679190                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-679190               250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-679190      200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-klvxz                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-679190               100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  local-path-storage          local-path-provisioner-86d989889c-vcxc2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node addons-679190 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node addons-679190 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node addons-679190 status is now: NodeHasSufficientPID
	  Normal  NodeReady                15m   kubelet          Node addons-679190 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node addons-679190 event: Registered Node addons-679190 in Controller
	
	
	==> dmesg <==
	[ +15.750642] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.216234] kauditd_printk_skb: 15 callbacks suppressed
	[  +9.065331] kauditd_printk_skb: 9 callbacks suppressed
	[  +8.958265] kauditd_printk_skb: 4 callbacks suppressed
	[Sep20 17:38] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.888314] kauditd_printk_skb: 42 callbacks suppressed
	[ +10.053683] kauditd_printk_skb: 77 callbacks suppressed
	[  +5.312437] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.622571] kauditd_printk_skb: 54 callbacks suppressed
	[ +26.074732] kauditd_printk_skb: 13 callbacks suppressed
	[Sep20 17:39] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 17:41] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 17:43] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 17:46] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.504877] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.087829] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.196655] kauditd_printk_skb: 59 callbacks suppressed
	[  +7.642956] kauditd_printk_skb: 1 callbacks suppressed
	[Sep20 17:47] kauditd_printk_skb: 14 callbacks suppressed
	[ +12.859221] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.845694] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.862171] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.689570] kauditd_printk_skb: 33 callbacks suppressed
	[Sep20 17:49] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.194903] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [f013b45bfd968d0cf23514647a630ce699e0ed7b9a36138115cce03563ebd0ef] <==
	{"level":"warn","ts":"2024-09-20T17:38:10.652384Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.30745ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-09-20T17:38:10.652414Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"327.397124ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:38:10.652445Z","caller":"traceutil/trace.go:171","msg":"trace[1075324407] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1040; }","duration":"327.426462ms","start":"2024-09-20T17:38:10.325013Z","end":"2024-09-20T17:38:10.652439Z","steps":["trace[1075324407] 'agreement among raft nodes before linearized reading'  (duration: 327.387735ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:38:10.652430Z","caller":"traceutil/trace.go:171","msg":"trace[133845077] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1040; }","duration":"184.35459ms","start":"2024-09-20T17:38:10.468068Z","end":"2024-09-20T17:38:10.652423Z","steps":["trace[133845077] 'agreement among raft nodes before linearized reading'  (duration: 184.292099ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:38:10.653015Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.844009ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:38:10.653104Z","caller":"traceutil/trace.go:171","msg":"trace[209721001] range","detail":"{range_begin:/registry/persistentvolumeclaims/; range_end:/registry/persistentvolumeclaims0; response_count:0; response_revision:1040; }","duration":"175.93602ms","start":"2024-09-20T17:38:10.477160Z","end":"2024-09-20T17:38:10.653096Z","steps":["trace[209721001] 'agreement among raft nodes before linearized reading'  (duration: 175.83519ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:38:10.653637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"380.604231ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"info","ts":"2024-09-20T17:38:10.653728Z","caller":"traceutil/trace.go:171","msg":"trace[747134196] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1040; }","duration":"380.696537ms","start":"2024-09-20T17:38:10.273023Z","end":"2024-09-20T17:38:10.653720Z","steps":["trace[747134196] 'agreement among raft nodes before linearized reading'  (duration: 380.53387ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:38:10.653804Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T17:38:10.272989Z","time spent":"380.80517ms","remote":"127.0.0.1:49862","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":170,"response size":31,"request content":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true "}
	{"level":"info","ts":"2024-09-20T17:39:02.358224Z","caller":"traceutil/trace.go:171","msg":"trace[1180894535] transaction","detail":"{read_only:false; response_revision:1253; number_of_response:1; }","duration":"124.748523ms","start":"2024-09-20T17:39:02.233450Z","end":"2024-09-20T17:39:02.358199Z","steps":["trace[1180894535] 'process raft request'  (duration: 124.628143ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:46:47.805022Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1506}
	{"level":"info","ts":"2024-09-20T17:46:47.839955Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1506,"took":"34.098256ms","hash":2552434771,"current-db-size-bytes":7012352,"current-db-size":"7.0 MB","current-db-size-in-use-bytes":3891200,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2024-09-20T17:46:47.840019Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2552434771,"revision":1506,"compact-revision":-1}
	{"level":"warn","ts":"2024-09-20T17:46:50.466442Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.994871ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:46:50.466495Z","caller":"traceutil/trace.go:171","msg":"trace[1650142898] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2057; }","duration":"139.081796ms","start":"2024-09-20T17:46:50.327400Z","end":"2024-09-20T17:46:50.466482Z","steps":["trace[1650142898] 'range keys from in-memory index tree'  (duration: 138.980039ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:46:50.466583Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.518533ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:46:50.466595Z","caller":"traceutil/trace.go:171","msg":"trace[2129239253] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2057; }","duration":"128.546109ms","start":"2024-09-20T17:46:50.338045Z","end":"2024-09-20T17:46:50.466591Z","steps":["trace[2129239253] 'range keys from in-memory index tree'  (duration: 128.454845ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:46:52.057090Z","caller":"traceutil/trace.go:171","msg":"trace[2010592287] transaction","detail":"{read_only:false; response_revision:2066; number_of_response:1; }","duration":"142.531828ms","start":"2024-09-20T17:46:51.914533Z","end":"2024-09-20T17:46:52.057065Z","steps":["trace[2010592287] 'process raft request'  (duration: 142.444566ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:46:52.274053Z","caller":"traceutil/trace.go:171","msg":"trace[1207827034] transaction","detail":"{read_only:false; response_revision:2067; number_of_response:1; }","duration":"307.822812ms","start":"2024-09-20T17:46:51.966218Z","end":"2024-09-20T17:46:52.274041Z","steps":["trace[1207827034] 'process raft request'  (duration: 307.159868ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:46:52.276791Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T17:46:51.966199Z","time spent":"310.498319ms","remote":"127.0.0.1:50070","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-679190\" mod_revision:1964 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-679190\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-679190\" > >"}
	{"level":"info","ts":"2024-09-20T17:47:05.086002Z","caller":"traceutil/trace.go:171","msg":"trace[426642049] transaction","detail":"{read_only:false; response_revision:2125; number_of_response:1; }","duration":"360.035966ms","start":"2024-09-20T17:47:04.725950Z","end":"2024-09-20T17:47:05.085986Z","steps":["trace[426642049] 'process raft request'  (duration: 359.86959ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:47:05.086166Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T17:47:04.725893Z","time spent":"360.202664ms","remote":"127.0.0.1:49862","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":764,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-zvp8w.17f704774ef9a14d\" mod_revision:1538 > success:<request_put:<key:\"/registry/events/gadget/gadget-zvp8w.17f704774ef9a14d\" value_size:693 lease:8396277679900029684 >> failure:<request_range:<key:\"/registry/events/gadget/gadget-zvp8w.17f704774ef9a14d\" > >"}
	{"level":"info","ts":"2024-09-20T17:51:47.813579Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2052}
	{"level":"info","ts":"2024-09-20T17:51:47.836817Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2052,"took":"22.283654ms","hash":395640825,"current-db-size-bytes":7012352,"current-db-size":"7.0 MB","current-db-size-in-use-bytes":4759552,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2024-09-20T17:51:47.836945Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":395640825,"revision":2052,"compact-revision":1506}
	
	
	==> gcp-auth [ec0b4062645fc2c1c8c82cdce410360489fe6600fb411ea6d60c712a5c12813f] <==
	2024/09/20 17:38:30 Ready to write response ...
	2024/09/20 17:38:30 Ready to marshal response ...
	2024/09/20 17:38:30 Ready to write response ...
	2024/09/20 17:46:33 Ready to marshal response ...
	2024/09/20 17:46:33 Ready to write response ...
	2024/09/20 17:46:33 Ready to marshal response ...
	2024/09/20 17:46:33 Ready to write response ...
	2024/09/20 17:46:44 Ready to marshal response ...
	2024/09/20 17:46:44 Ready to write response ...
	2024/09/20 17:46:46 Ready to marshal response ...
	2024/09/20 17:46:46 Ready to write response ...
	2024/09/20 17:46:46 Ready to marshal response ...
	2024/09/20 17:46:46 Ready to write response ...
	2024/09/20 17:46:46 Ready to marshal response ...
	2024/09/20 17:46:46 Ready to write response ...
	2024/09/20 17:46:46 Ready to marshal response ...
	2024/09/20 17:46:46 Ready to write response ...
	2024/09/20 17:46:58 Ready to marshal response ...
	2024/09/20 17:46:58 Ready to write response ...
	2024/09/20 17:47:17 Ready to marshal response ...
	2024/09/20 17:47:17 Ready to write response ...
	2024/09/20 17:47:30 Ready to marshal response ...
	2024/09/20 17:47:30 Ready to write response ...
	2024/09/20 17:49:38 Ready to marshal response ...
	2024/09/20 17:49:38 Ready to write response ...
	
	
	==> kernel <==
	 17:52:37 up 16 min,  0 users,  load average: 0.06, 0.24, 0.28
	Linux addons-679190 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1ad81a4e34c0219e34fccc748356a548ec1f96c3939761411d1393005f3368bd] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0920 17:38:30.790878       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.117.29:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.117.29:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.117.29:443: connect: connection refused" logger="UnhandledError"
	I0920 17:38:30.830638       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0920 17:46:46.684183       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.95.13"}
	I0920 17:47:12.280076       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0920 17:47:12.410157       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	W0920 17:47:13.322883       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 17:47:17.844287       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 17:47:18.049415       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.232.246"}
	I0920 17:47:46.449796       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 17:47:46.449837       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 17:47:46.476080       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 17:47:46.476629       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 17:47:46.493047       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 17:47:46.493097       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 17:47:46.524567       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 17:47:46.524602       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 17:47:46.579848       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 17:47:46.579981       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0920 17:47:47.491165       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0920 17:47:47.581041       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0920 17:47:47.634102       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0920 17:49:38.828888       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.91.183"}
	
	
	==> kube-controller-manager [6b6b3339abef5b932a0a51290bacbf5ea276d9c2f651978f0fb128032a963ff0] <==
	W0920 17:50:13.582406       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:50:13.582613       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:50:33.780953       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:50:33.781180       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:50:41.146409       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:50:41.146505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:50:47.927577       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:50:47.927645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:51:09.613543       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:51:09.613668       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:51:23.600769       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:51:23.600848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:51:24.353555       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:51:24.353659       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:51:30.504653       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:51:30.504823       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:51:53.588433       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:51:53.588497       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:52:08.675234       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:52:08.675299       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:52:13.299126       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:52:13.299284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:52:13.303073       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:52:13.303126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 17:52:35.782292       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="11.608µs"
	
	
	==> kube-proxy [92cf3212ca3856ac30692de35be4bf7391dbf53d3b71d366bbd05e33353b54b5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 17:36:58.351748       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 17:36:58.367671       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.158"]
	E0920 17:36:58.367735       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:36:58.429527       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 17:36:58.429561       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 17:36:58.429586       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:36:58.435086       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:36:58.435354       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:36:58.435365       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:36:58.438303       1 config.go:199] "Starting service config controller"
	I0920 17:36:58.438316       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:36:58.438348       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:36:58.438352       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:36:58.441613       1 config.go:328] "Starting node config controller"
	I0920 17:36:58.441622       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:36:58.539182       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 17:36:58.539245       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:36:58.541947       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b48780df679f85764217ee650d3268dfc7988e43dd065577ee1d4a41b3b94f2c] <==
	W0920 17:36:49.950209       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:36:49.950280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:49.987691       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 17:36:49.987738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.025498       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 17:36:50.025544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.093076       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 17:36:50.093121       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.124089       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 17:36:50.124138       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.155077       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 17:36:50.155234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.155746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 17:36:50.155937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.223176       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 17:36:50.223220       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.247365       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 17:36:50.247426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.317547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 17:36:50.317600       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.383564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 17:36:50.383615       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:36:50.534598       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 17:36:50.534719       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 17:36:53.562785       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 17:51:50 addons-679190 kubelet[1208]: E0920 17:51:50.724886    1208 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="bc41ca01-d6e4-43a6-830f-a3cac4fe89d6"
	Sep 20 17:51:51 addons-679190 kubelet[1208]: E0920 17:51:51.740425    1208 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 17:51:51 addons-679190 kubelet[1208]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 17:51:51 addons-679190 kubelet[1208]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 17:51:51 addons-679190 kubelet[1208]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 17:51:51 addons-679190 kubelet[1208]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 17:51:52 addons-679190 kubelet[1208]: E0920 17:51:52.267815    1208 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854712267259048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:51:52 addons-679190 kubelet[1208]: E0920 17:51:52.267846    1208 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854712267259048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:52:01 addons-679190 kubelet[1208]: E0920 17:52:01.724982    1208 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="bc41ca01-d6e4-43a6-830f-a3cac4fe89d6"
	Sep 20 17:52:02 addons-679190 kubelet[1208]: E0920 17:52:02.271020    1208 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854722270617423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:52:02 addons-679190 kubelet[1208]: E0920 17:52:02.271198    1208 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854722270617423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:52:12 addons-679190 kubelet[1208]: E0920 17:52:12.274845    1208 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854732274290254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:52:12 addons-679190 kubelet[1208]: E0920 17:52:12.274884    1208 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854732274290254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:52:12 addons-679190 kubelet[1208]: E0920 17:52:12.723654    1208 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="bc41ca01-d6e4-43a6-830f-a3cac4fe89d6"
	Sep 20 17:52:22 addons-679190 kubelet[1208]: E0920 17:52:22.278428    1208 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854742277813876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:52:22 addons-679190 kubelet[1208]: E0920 17:52:22.278733    1208 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854742277813876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:52:27 addons-679190 kubelet[1208]: E0920 17:52:27.723968    1208 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="bc41ca01-d6e4-43a6-830f-a3cac4fe89d6"
	Sep 20 17:52:32 addons-679190 kubelet[1208]: E0920 17:52:32.282184    1208 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854752281793209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:52:32 addons-679190 kubelet[1208]: E0920 17:52:32.282227    1208 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726854752281793209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 17:52:37 addons-679190 kubelet[1208]: I0920 17:52:37.156530    1208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pfjj\" (UniqueName: \"kubernetes.io/projected/adb63308-9d43-444e-b31b-a5efeef5d323-kube-api-access-9pfjj\") pod \"adb63308-9d43-444e-b31b-a5efeef5d323\" (UID: \"adb63308-9d43-444e-b31b-a5efeef5d323\") "
	Sep 20 17:52:37 addons-679190 kubelet[1208]: I0920 17:52:37.156595    1208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/adb63308-9d43-444e-b31b-a5efeef5d323-tmp-dir\") pod \"adb63308-9d43-444e-b31b-a5efeef5d323\" (UID: \"adb63308-9d43-444e-b31b-a5efeef5d323\") "
	Sep 20 17:52:37 addons-679190 kubelet[1208]: I0920 17:52:37.157011    1208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adb63308-9d43-444e-b31b-a5efeef5d323-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "adb63308-9d43-444e-b31b-a5efeef5d323" (UID: "adb63308-9d43-444e-b31b-a5efeef5d323"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 20 17:52:37 addons-679190 kubelet[1208]: I0920 17:52:37.161651    1208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adb63308-9d43-444e-b31b-a5efeef5d323-kube-api-access-9pfjj" (OuterVolumeSpecName: "kube-api-access-9pfjj") pod "adb63308-9d43-444e-b31b-a5efeef5d323" (UID: "adb63308-9d43-444e-b31b-a5efeef5d323"). InnerVolumeSpecName "kube-api-access-9pfjj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 17:52:37 addons-679190 kubelet[1208]: I0920 17:52:37.257416    1208 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9pfjj\" (UniqueName: \"kubernetes.io/projected/adb63308-9d43-444e-b31b-a5efeef5d323-kube-api-access-9pfjj\") on node \"addons-679190\" DevicePath \"\""
	Sep 20 17:52:37 addons-679190 kubelet[1208]: I0920 17:52:37.257461    1208 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/adb63308-9d43-444e-b31b-a5efeef5d323-tmp-dir\") on node \"addons-679190\" DevicePath \"\""
	
	
	==> storage-provisioner [c860a4c507c477109d4747b1b074c68a4a31b9aeee2cfc9591edda6f92a49c41] <==
	I0920 17:37:04.229131       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 17:37:04.316870       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 17:37:04.316984       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 17:37:04.369383       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 17:37:04.369574       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-679190_755bbb3d-4dd9-4398-8fc4-ca84bbdfa577!
	I0920 17:37:04.369640       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f1220de9-2330-4b06-bc0f-6bb70dd8d11a", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-679190_755bbb3d-4dd9-4398-8fc4-ca84bbdfa577 became leader
	I0920 17:37:04.470051       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-679190_755bbb3d-4dd9-4398-8fc4-ca84bbdfa577!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-679190 -n addons-679190
helpers_test.go:261: (dbg) Run:  kubectl --context addons-679190 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-679190 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-679190 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-679190/192.168.39.158
	Start Time:       Fri, 20 Sep 2024 17:38:30 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n77bn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n77bn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  14m                  default-scheduler  Successfully assigned default/busybox to addons-679190
	  Normal   Pulling    12m (x4 over 14m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     12m (x4 over 14m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     12m (x4 over 14m)    kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x6 over 14m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m3s (x42 over 14m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (350.62s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (2.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 image ls --format short --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p functional-024386 image ls --format short --alsologtostderr: (2.307819033s)
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-024386 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-024386 image ls --format short --alsologtostderr:
I0920 17:57:57.048120  256126 out.go:345] Setting OutFile to fd 1 ...
I0920 17:57:57.048526  256126 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:57:57.048545  256126 out.go:358] Setting ErrFile to fd 2...
I0920 17:57:57.048553  256126 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:57:57.049002  256126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
I0920 17:57:57.054998  256126 config.go:182] Loaded profile config "functional-024386": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 17:57:57.055204  256126 config.go:182] Loaded profile config "functional-024386": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 17:57:57.055775  256126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 17:57:57.055827  256126 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 17:57:57.072861  256126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
I0920 17:57:57.073592  256126 main.go:141] libmachine: () Calling .GetVersion
I0920 17:57:57.074334  256126 main.go:141] libmachine: Using API Version  1
I0920 17:57:57.074366  256126 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 17:57:57.077298  256126 main.go:141] libmachine: () Calling .GetMachineName
I0920 17:57:57.077581  256126 main.go:141] libmachine: (functional-024386) Calling .GetState
I0920 17:57:57.079743  256126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 17:57:57.079824  256126 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 17:57:57.103258  256126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
I0920 17:57:57.103783  256126 main.go:141] libmachine: () Calling .GetVersion
I0920 17:57:57.104476  256126 main.go:141] libmachine: Using API Version  1
I0920 17:57:57.104511  256126 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 17:57:57.104974  256126 main.go:141] libmachine: () Calling .GetMachineName
I0920 17:57:57.105245  256126 main.go:141] libmachine: (functional-024386) Calling .DriverName
I0920 17:57:57.105504  256126 ssh_runner.go:195] Run: systemctl --version
I0920 17:57:57.105538  256126 main.go:141] libmachine: (functional-024386) Calling .GetSSHHostname
I0920 17:57:57.109117  256126 main.go:141] libmachine: (functional-024386) DBG | domain functional-024386 has defined MAC address 52:54:00:08:4e:4d in network mk-functional-024386
I0920 17:57:57.109628  256126 main.go:141] libmachine: (functional-024386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:4e:4d", ip: ""} in network mk-functional-024386: {Iface:virbr1 ExpiryTime:2024-09-20 18:55:16 +0000 UTC Type:0 Mac:52:54:00:08:4e:4d Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:functional-024386 Clientid:01:52:54:00:08:4e:4d}
I0920 17:57:57.109662  256126 main.go:141] libmachine: (functional-024386) DBG | domain functional-024386 has defined IP address 192.168.39.75 and MAC address 52:54:00:08:4e:4d in network mk-functional-024386
I0920 17:57:57.109834  256126 main.go:141] libmachine: (functional-024386) Calling .GetSSHPort
I0920 17:57:57.110016  256126 main.go:141] libmachine: (functional-024386) Calling .GetSSHKeyPath
I0920 17:57:57.110221  256126 main.go:141] libmachine: (functional-024386) Calling .GetSSHUsername
I0920 17:57:57.110366  256126 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/functional-024386/id_rsa Username:docker}
I0920 17:57:57.222482  256126 ssh_runner.go:195] Run: sudo crictl images --output json
I0920 17:57:59.281128  256126 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.058596518s)
W0920 17:57:59.281254  256126 cache_images.go:734] Failed to list images for profile functional-024386 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E0920 17:57:59.269959    7806 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,},}"
time="2024-09-20T17:57:59Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
I0920 17:57:59.281335  256126 main.go:141] libmachine: Making call to close driver server
I0920 17:57:59.281352  256126 main.go:141] libmachine: (functional-024386) Calling .Close
I0920 17:57:59.281739  256126 main.go:141] libmachine: (functional-024386) DBG | Closing plugin on server side
I0920 17:57:59.281745  256126 main.go:141] libmachine: Successfully made call to close driver server
I0920 17:57:59.281761  256126 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 17:57:59.281769  256126 main.go:141] libmachine: Making call to close driver server
I0920 17:57:59.281779  256126 main.go:141] libmachine: (functional-024386) Calling .Close
I0920 17:57:59.282061  256126 main.go:141] libmachine: Successfully made call to close driver server
I0920 17:57:59.282086  256126 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 17:57:59.282104  256126 main.go:141] libmachine: (functional-024386) DBG | Closing plugin on server side
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.31s)
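For context, the step that timed out in the stderr above can be re-run by hand against the same profile. A minimal sketch, assuming the functional-024386 VM is still running and the minikube binary sits at the path the test used:

	out/minikube-linux-amd64 -p functional-024386 ssh "sudo crictl images --output json"

If CRI-O's image service is responsive this prints the image list as JSON; a repeat of the DeadlineExceeded error would point at the container runtime rather than the test harness.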

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (6.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-024386 ssh pgrep buildkitd: exit status 1 (274.06664ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 image build -t localhost/my-image:functional-024386 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-024386 image build -t localhost/my-image:functional-024386 testdata/build --alsologtostderr: (4.41875393s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-024386 image build -t localhost/my-image:functional-024386 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2959e2ca2c7
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-024386
--> 44bd1deef69
Successfully tagged localhost/my-image:functional-024386
44bd1deef69b64938ce4e46778e0535b81623c5cecc2481e15ebc88348391a8a
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-024386 image build -t localhost/my-image:functional-024386 testdata/build --alsologtostderr:
I0920 17:58:00.933886  256219 out.go:345] Setting OutFile to fd 1 ...
I0920 17:58:00.934087  256219 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:58:00.934106  256219 out.go:358] Setting ErrFile to fd 2...
I0920 17:58:00.934114  256219 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:58:00.934330  256219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
I0920 17:58:00.935011  256219 config.go:182] Loaded profile config "functional-024386": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 17:58:00.935745  256219 config.go:182] Loaded profile config "functional-024386": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 17:58:00.936342  256219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 17:58:00.936403  256219 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 17:58:00.952465  256219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39341
I0920 17:58:00.953055  256219 main.go:141] libmachine: () Calling .GetVersion
I0920 17:58:00.953754  256219 main.go:141] libmachine: Using API Version  1
I0920 17:58:00.953785  256219 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 17:58:00.954259  256219 main.go:141] libmachine: () Calling .GetMachineName
I0920 17:58:00.954476  256219 main.go:141] libmachine: (functional-024386) Calling .GetState
I0920 17:58:00.956642  256219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 17:58:00.956687  256219 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 17:58:00.972527  256219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44233
I0920 17:58:00.973114  256219 main.go:141] libmachine: () Calling .GetVersion
I0920 17:58:00.973814  256219 main.go:141] libmachine: Using API Version  1
I0920 17:58:00.973850  256219 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 17:58:00.974198  256219 main.go:141] libmachine: () Calling .GetMachineName
I0920 17:58:00.974426  256219 main.go:141] libmachine: (functional-024386) Calling .DriverName
I0920 17:58:00.974640  256219 ssh_runner.go:195] Run: systemctl --version
I0920 17:58:00.974673  256219 main.go:141] libmachine: (functional-024386) Calling .GetSSHHostname
I0920 17:58:00.978303  256219 main.go:141] libmachine: (functional-024386) DBG | domain functional-024386 has defined MAC address 52:54:00:08:4e:4d in network mk-functional-024386
I0920 17:58:00.978821  256219 main.go:141] libmachine: (functional-024386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:4e:4d", ip: ""} in network mk-functional-024386: {Iface:virbr1 ExpiryTime:2024-09-20 18:55:16 +0000 UTC Type:0 Mac:52:54:00:08:4e:4d Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:functional-024386 Clientid:01:52:54:00:08:4e:4d}
I0920 17:58:00.978851  256219 main.go:141] libmachine: (functional-024386) DBG | domain functional-024386 has defined IP address 192.168.39.75 and MAC address 52:54:00:08:4e:4d in network mk-functional-024386
I0920 17:58:00.979043  256219 main.go:141] libmachine: (functional-024386) Calling .GetSSHPort
I0920 17:58:00.979224  256219 main.go:141] libmachine: (functional-024386) Calling .GetSSHKeyPath
I0920 17:58:00.979414  256219 main.go:141] libmachine: (functional-024386) Calling .GetSSHUsername
I0920 17:58:00.979744  256219 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/functional-024386/id_rsa Username:docker}
I0920 17:58:01.085800  256219 build_images.go:161] Building image from path: /tmp/build.2410265488.tar
I0920 17:58:01.085874  256219 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 17:58:01.106516  256219 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2410265488.tar
I0920 17:58:01.111386  256219 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2410265488.tar: stat -c "%s %y" /var/lib/minikube/build/build.2410265488.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2410265488.tar': No such file or directory
I0920 17:58:01.111430  256219 ssh_runner.go:362] scp /tmp/build.2410265488.tar --> /var/lib/minikube/build/build.2410265488.tar (3072 bytes)
I0920 17:58:01.140244  256219 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2410265488
I0920 17:58:01.150652  256219 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2410265488 -xf /var/lib/minikube/build/build.2410265488.tar
I0920 17:58:01.160752  256219 crio.go:315] Building image: /var/lib/minikube/build/build.2410265488
I0920 17:58:01.160852  256219 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-024386 /var/lib/minikube/build/build.2410265488 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0920 17:58:05.230359  256219 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-024386 /var/lib/minikube/build/build.2410265488 --cgroup-manager=cgroupfs: (4.069471975s)
I0920 17:58:05.230454  256219 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2410265488
I0920 17:58:05.263516  256219 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2410265488.tar
I0920 17:58:05.296077  256219 build_images.go:217] Built localhost/my-image:functional-024386 from /tmp/build.2410265488.tar
I0920 17:58:05.296120  256219 build_images.go:133] succeeded building to: functional-024386
I0920 17:58:05.296125  256219 build_images.go:134] failed building to: 
I0920 17:58:05.296154  256219 main.go:141] libmachine: Making call to close driver server
I0920 17:58:05.296167  256219 main.go:141] libmachine: (functional-024386) Calling .Close
I0920 17:58:05.296531  256219 main.go:141] libmachine: Successfully made call to close driver server
I0920 17:58:05.296562  256219 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 17:58:05.296574  256219 main.go:141] libmachine: Making call to close driver server
I0920 17:58:05.296583  256219 main.go:141] libmachine: (functional-024386) Calling .Close
I0920 17:58:05.296945  256219 main.go:141] libmachine: Successfully made call to close driver server
I0920 17:58:05.296963  256219 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 17:58:05.296978  256219 main.go:141] libmachine: (functional-024386) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 image ls
functional_test.go:451: (dbg) Done: out/minikube-linux-amd64 -p functional-024386 image ls: (2.273975048s)
functional_test.go:446: expected "localhost/my-image:functional-024386" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (6.97s)
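For reference, the three STEP lines in the stdout above imply a build context roughly like the following; this is a sketch reconstructed from the log, not necessarily the exact contents of testdata/build:

	# Containerfile (reconstructed from STEP 1/3 through 3/3 above)
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /

The podman build itself completed and the image was committed as localhost/my-image:functional-024386; the assertion that failed is the follow-up "image ls" not listing that tag.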

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 node stop m02 -v=7 --alsologtostderr
E0920 18:03:10.463115  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:03:30.941852  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:03:51.425334  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:03:58.644214  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-347193 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.510965813s)

                                                
                                                
-- stdout --
	* Stopping node "ha-347193-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:02:59.579740  260608 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:02:59.579935  260608 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:02:59.579947  260608 out.go:358] Setting ErrFile to fd 2...
	I0920 18:02:59.579954  260608 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:02:59.580159  260608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 18:02:59.580551  260608 mustload.go:65] Loading cluster: ha-347193
	I0920 18:02:59.581187  260608 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:02:59.581223  260608 stop.go:39] StopHost: ha-347193-m02
	I0920 18:02:59.581670  260608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:02:59.581729  260608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:02:59.599560  260608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41031
	I0920 18:02:59.600069  260608 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:02:59.600748  260608 main.go:141] libmachine: Using API Version  1
	I0920 18:02:59.600776  260608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:02:59.601293  260608 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:02:59.603972  260608 out.go:177] * Stopping node "ha-347193-m02"  ...
	I0920 18:02:59.605478  260608 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 18:02:59.605508  260608 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 18:02:59.605790  260608 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 18:02:59.605827  260608 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 18:02:59.608857  260608 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 18:02:59.609314  260608 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 18:02:59.609339  260608 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 18:02:59.609481  260608 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 18:02:59.609776  260608 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 18:02:59.609961  260608 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 18:02:59.610218  260608 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa Username:docker}
	I0920 18:02:59.703102  260608 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 18:02:59.761090  260608 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 18:02:59.815915  260608 main.go:141] libmachine: Stopping "ha-347193-m02"...
	I0920 18:02:59.815982  260608 main.go:141] libmachine: (ha-347193-m02) Calling .GetState
	I0920 18:02:59.817884  260608 main.go:141] libmachine: (ha-347193-m02) Calling .Stop
	I0920 18:02:59.822948  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 0/120
	I0920 18:03:00.825173  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 1/120
	I0920 18:03:01.826874  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 2/120
	I0920 18:03:02.828766  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 3/120
	I0920 18:03:03.830536  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 4/120
	I0920 18:03:04.832863  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 5/120
	I0920 18:03:05.834656  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 6/120
	I0920 18:03:06.836518  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 7/120
	I0920 18:03:07.837891  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 8/120
	I0920 18:03:08.839517  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 9/120
	I0920 18:03:09.841209  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 10/120
	I0920 18:03:10.843167  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 11/120
	I0920 18:03:11.844841  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 12/120
	I0920 18:03:12.846493  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 13/120
	I0920 18:03:13.848721  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 14/120
	I0920 18:03:14.850611  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 15/120
	I0920 18:03:15.852807  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 16/120
	I0920 18:03:16.854835  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 17/120
	I0920 18:03:17.856771  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 18/120
	I0920 18:03:18.858669  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 19/120
	I0920 18:03:19.861165  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 20/120
	I0920 18:03:20.862920  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 21/120
	I0920 18:03:21.864608  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 22/120
	I0920 18:03:22.866232  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 23/120
	I0920 18:03:23.868561  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 24/120
	I0920 18:03:24.870591  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 25/120
	I0920 18:03:25.872390  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 26/120
	I0920 18:03:26.873873  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 27/120
	I0920 18:03:27.875249  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 28/120
	I0920 18:03:28.876797  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 29/120
	I0920 18:03:29.878912  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 30/120
	I0920 18:03:30.880418  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 31/120
	I0920 18:03:31.882156  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 32/120
	I0920 18:03:32.884426  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 33/120
	I0920 18:03:33.885834  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 34/120
	I0920 18:03:34.888435  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 35/120
	I0920 18:03:35.890742  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 36/120
	I0920 18:03:36.892591  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 37/120
	I0920 18:03:37.894321  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 38/120
	I0920 18:03:38.896549  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 39/120
	I0920 18:03:39.898605  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 40/120
	I0920 18:03:40.900578  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 41/120
	I0920 18:03:41.902512  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 42/120
	I0920 18:03:42.904749  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 43/120
	I0920 18:03:43.906178  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 44/120
	I0920 18:03:44.908167  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 45/120
	I0920 18:03:45.909691  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 46/120
	I0920 18:03:46.911277  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 47/120
	I0920 18:03:47.912918  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 48/120
	I0920 18:03:48.914678  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 49/120
	I0920 18:03:49.916776  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 50/120
	I0920 18:03:50.918335  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 51/120
	I0920 18:03:51.920390  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 52/120
	I0920 18:03:52.921724  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 53/120
	I0920 18:03:53.923056  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 54/120
	I0920 18:03:54.924908  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 55/120
	I0920 18:03:55.927132  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 56/120
	I0920 18:03:56.928635  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 57/120
	I0920 18:03:57.930177  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 58/120
	I0920 18:03:58.932615  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 59/120
	I0920 18:03:59.935052  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 60/120
	I0920 18:04:00.936558  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 61/120
	I0920 18:04:01.938263  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 62/120
	I0920 18:04:02.940487  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 63/120
	I0920 18:04:03.942379  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 64/120
	I0920 18:04:04.944834  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 65/120
	I0920 18:04:05.946825  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 66/120
	I0920 18:04:06.948422  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 67/120
	I0920 18:04:07.950016  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 68/120
	I0920 18:04:08.951787  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 69/120
	I0920 18:04:09.953636  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 70/120
	I0920 18:04:10.955305  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 71/120
	I0920 18:04:11.956699  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 72/120
	I0920 18:04:12.958302  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 73/120
	I0920 18:04:13.960941  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 74/120
	I0920 18:04:14.963375  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 75/120
	I0920 18:04:15.964972  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 76/120
	I0920 18:04:16.966349  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 77/120
	I0920 18:04:17.968442  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 78/120
	I0920 18:04:18.969971  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 79/120
	I0920 18:04:19.972535  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 80/120
	I0920 18:04:20.973961  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 81/120
	I0920 18:04:21.975440  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 82/120
	I0920 18:04:22.976950  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 83/120
	I0920 18:04:23.978621  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 84/120
	I0920 18:04:24.980039  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 85/120
	I0920 18:04:25.981454  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 86/120
	I0920 18:04:26.982821  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 87/120
	I0920 18:04:27.984393  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 88/120
	I0920 18:04:28.986135  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 89/120
	I0920 18:04:29.988296  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 90/120
	I0920 18:04:30.989805  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 91/120
	I0920 18:04:31.991166  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 92/120
	I0920 18:04:32.992952  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 93/120
	I0920 18:04:33.995409  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 94/120
	I0920 18:04:34.997550  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 95/120
	I0920 18:04:35.999251  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 96/120
	I0920 18:04:37.000616  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 97/120
	I0920 18:04:38.002293  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 98/120
	I0920 18:04:39.004433  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 99/120
	I0920 18:04:40.007110  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 100/120
	I0920 18:04:41.008700  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 101/120
	I0920 18:04:42.011184  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 102/120
	I0920 18:04:43.013168  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 103/120
	I0920 18:04:44.014877  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 104/120
	I0920 18:04:45.017435  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 105/120
	I0920 18:04:46.019166  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 106/120
	I0920 18:04:47.021108  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 107/120
	I0920 18:04:48.022850  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 108/120
	I0920 18:04:49.024864  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 109/120
	I0920 18:04:50.026568  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 110/120
	I0920 18:04:51.028633  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 111/120
	I0920 18:04:52.030291  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 112/120
	I0920 18:04:53.031747  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 113/120
	I0920 18:04:54.033245  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 114/120
	I0920 18:04:55.035357  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 115/120
	I0920 18:04:56.037262  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 116/120
	I0920 18:04:57.038757  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 117/120
	I0920 18:04:58.041164  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 118/120
	I0920 18:04:59.042647  260608 main.go:141] libmachine: (ha-347193-m02) Waiting for machine to stop 119/120
	I0920 18:05:00.044036  260608 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0920 18:05:00.044180  260608 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-347193 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr
E0920 18:05:13.348309  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Done: out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr: (18.914972364s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-347193 -n ha-347193
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-347193 logs -n 25: (1.499949165s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-347193 cp ha-347193-m03:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3833348347/001/cp-test_ha-347193-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m03:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193:/home/docker/cp-test_ha-347193-m03_ha-347193.txt                       |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193 sudo cat                                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m03_ha-347193.txt                                 |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m03:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m02:/home/docker/cp-test_ha-347193-m03_ha-347193-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193-m02 sudo cat                                          | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m03_ha-347193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m03:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04:/home/docker/cp-test_ha-347193-m03_ha-347193-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193-m04 sudo cat                                          | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m03_ha-347193-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-347193 cp testdata/cp-test.txt                                                | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3833348347/001/cp-test_ha-347193-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193:/home/docker/cp-test_ha-347193-m04_ha-347193.txt                       |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193 sudo cat                                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m04_ha-347193.txt                                 |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m02:/home/docker/cp-test_ha-347193-m04_ha-347193-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193-m02 sudo cat                                          | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m04_ha-347193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m03:/home/docker/cp-test_ha-347193-m04_ha-347193-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193-m03 sudo cat                                          | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m04_ha-347193-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-347193 node stop m02 -v=7                                                     | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:58:19
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:58:19.719554  256536 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:58:19.719784  256536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:58:19.719792  256536 out.go:358] Setting ErrFile to fd 2...
	I0920 17:58:19.719796  256536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:58:19.719960  256536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 17:58:19.720540  256536 out.go:352] Setting JSON to false
	I0920 17:58:19.721444  256536 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6043,"bootTime":1726849057,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:58:19.721554  256536 start.go:139] virtualization: kvm guest
	I0920 17:58:19.723941  256536 out.go:177] * [ha-347193] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:58:19.725468  256536 notify.go:220] Checking for updates...
	I0920 17:58:19.725480  256536 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 17:58:19.727002  256536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:58:19.728644  256536 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:58:19.730001  256536 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:58:19.731378  256536 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:58:19.732922  256536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:58:19.734763  256536 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:58:19.774481  256536 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 17:58:19.776642  256536 start.go:297] selected driver: kvm2
	I0920 17:58:19.776667  256536 start.go:901] validating driver "kvm2" against <nil>
	I0920 17:58:19.776681  256536 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:58:19.777528  256536 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:58:19.777634  256536 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 17:58:19.794619  256536 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 17:58:19.795141  256536 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:58:19.795583  256536 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:58:19.795675  256536 cni.go:84] Creating CNI manager for ""
	I0920 17:58:19.795761  256536 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0920 17:58:19.795792  256536 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 17:58:19.795946  256536 start.go:340] cluster config:
	{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:58:19.796187  256536 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:58:19.798837  256536 out.go:177] * Starting "ha-347193" primary control-plane node in "ha-347193" cluster
	I0920 17:58:19.800296  256536 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:58:19.800352  256536 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 17:58:19.800362  256536 cache.go:56] Caching tarball of preloaded images
	I0920 17:58:19.800459  256536 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:58:19.800470  256536 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:58:19.800790  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 17:58:19.800819  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json: {Name:mkfd3b988e8aa616e3cc88608f2502239f4ba220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:19.800990  256536 start.go:360] acquireMachinesLock for ha-347193: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:58:19.801023  256536 start.go:364] duration metric: took 17.719µs to acquireMachinesLock for "ha-347193"
	I0920 17:58:19.801041  256536 start.go:93] Provisioning new machine with config: &{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:58:19.801110  256536 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 17:58:19.803289  256536 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 17:58:19.803488  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:58:19.803546  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:58:19.819050  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41053
	I0920 17:58:19.819630  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:58:19.820279  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:58:19.820296  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:58:19.820691  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:58:19.820938  256536 main.go:141] libmachine: (ha-347193) Calling .GetMachineName
	I0920 17:58:19.821115  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:19.821335  256536 start.go:159] libmachine.API.Create for "ha-347193" (driver="kvm2")
	I0920 17:58:19.821366  256536 client.go:168] LocalClient.Create starting
	I0920 17:58:19.821397  256536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem
	I0920 17:58:19.821431  256536 main.go:141] libmachine: Decoding PEM data...
	I0920 17:58:19.821444  256536 main.go:141] libmachine: Parsing certificate...
	I0920 17:58:19.821515  256536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem
	I0920 17:58:19.821537  256536 main.go:141] libmachine: Decoding PEM data...
	I0920 17:58:19.821546  256536 main.go:141] libmachine: Parsing certificate...
	I0920 17:58:19.821560  256536 main.go:141] libmachine: Running pre-create checks...
	I0920 17:58:19.821570  256536 main.go:141] libmachine: (ha-347193) Calling .PreCreateCheck
	I0920 17:58:19.821998  256536 main.go:141] libmachine: (ha-347193) Calling .GetConfigRaw
	I0920 17:58:19.822485  256536 main.go:141] libmachine: Creating machine...
	I0920 17:58:19.822507  256536 main.go:141] libmachine: (ha-347193) Calling .Create
	I0920 17:58:19.822712  256536 main.go:141] libmachine: (ha-347193) Creating KVM machine...
	I0920 17:58:19.824224  256536 main.go:141] libmachine: (ha-347193) DBG | found existing default KVM network
	I0920 17:58:19.824984  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:19.824842  256559 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000221330}
	I0920 17:58:19.825024  256536 main.go:141] libmachine: (ha-347193) DBG | created network xml: 
	I0920 17:58:19.825037  256536 main.go:141] libmachine: (ha-347193) DBG | <network>
	I0920 17:58:19.825044  256536 main.go:141] libmachine: (ha-347193) DBG |   <name>mk-ha-347193</name>
	I0920 17:58:19.825049  256536 main.go:141] libmachine: (ha-347193) DBG |   <dns enable='no'/>
	I0920 17:58:19.825054  256536 main.go:141] libmachine: (ha-347193) DBG |   
	I0920 17:58:19.825061  256536 main.go:141] libmachine: (ha-347193) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 17:58:19.825067  256536 main.go:141] libmachine: (ha-347193) DBG |     <dhcp>
	I0920 17:58:19.825072  256536 main.go:141] libmachine: (ha-347193) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 17:58:19.825079  256536 main.go:141] libmachine: (ha-347193) DBG |     </dhcp>
	I0920 17:58:19.825084  256536 main.go:141] libmachine: (ha-347193) DBG |   </ip>
	I0920 17:58:19.825090  256536 main.go:141] libmachine: (ha-347193) DBG |   
	I0920 17:58:19.825094  256536 main.go:141] libmachine: (ha-347193) DBG | </network>
	I0920 17:58:19.825099  256536 main.go:141] libmachine: (ha-347193) DBG | 
	I0920 17:58:19.830808  256536 main.go:141] libmachine: (ha-347193) DBG | trying to create private KVM network mk-ha-347193 192.168.39.0/24...
	I0920 17:58:19.907893  256536 main.go:141] libmachine: (ha-347193) DBG | private KVM network mk-ha-347193 192.168.39.0/24 created
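The driver generates a libvirt <network> definition for the private subnet it picked (192.168.39.0/24). A minimal Go sketch that produces an equivalent XML document with encoding/xml; this is illustrative only, not the kvm2 driver's actual template:

package main

import (
	"encoding/xml"
	"fmt"
)

// network mirrors the <network> element logged above. Field names follow
// libvirt's network XML schema; only the pieces seen in the log are modeled.
type network struct {
	XMLName xml.Name `xml:"network"`
	Name    string   `xml:"name"`
	DNS     struct {
		Enable string `xml:"enable,attr"`
	} `xml:"dns"`
	IP struct {
		Address string `xml:"address,attr"`
		Netmask string `xml:"netmask,attr"`
		DHCP    struct {
			Range struct {
				Start string `xml:"start,attr"`
				End   string `xml:"end,attr"`
			} `xml:"range"`
		} `xml:"dhcp"`
	} `xml:"ip"`
}

func main() {
	n := network{Name: "mk-ha-347193"}
	n.DNS.Enable = "no"
	n.IP.Address = "192.168.39.1"
	n.IP.Netmask = "255.255.255.0"
	n.IP.DHCP.Range.Start = "192.168.39.2"
	n.IP.DHCP.Range.End = "192.168.39.253"
	out, err := xml.MarshalIndent(n, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}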
	I0920 17:58:19.907950  256536 main.go:141] libmachine: (ha-347193) Setting up store path in /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193 ...
	I0920 17:58:19.907968  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:19.907787  256559 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:58:19.907992  256536 main.go:141] libmachine: (ha-347193) Building disk image from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 17:58:19.908014  256536 main.go:141] libmachine: (ha-347193) Downloading /home/jenkins/minikube-integration/19679-237658/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 17:58:20.183507  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:20.183335  256559 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa...
	I0920 17:58:20.394510  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:20.394309  256559 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/ha-347193.rawdisk...
	I0920 17:58:20.394561  256536 main.go:141] libmachine: (ha-347193) DBG | Writing magic tar header
	I0920 17:58:20.394576  256536 main.go:141] libmachine: (ha-347193) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193 (perms=drwx------)
	I0920 17:58:20.394593  256536 main.go:141] libmachine: (ha-347193) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines (perms=drwxr-xr-x)
	I0920 17:58:20.394599  256536 main.go:141] libmachine: (ha-347193) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube (perms=drwxr-xr-x)
	I0920 17:58:20.394610  256536 main.go:141] libmachine: (ha-347193) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658 (perms=drwxrwxr-x)
	I0920 17:58:20.394615  256536 main.go:141] libmachine: (ha-347193) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 17:58:20.394629  256536 main.go:141] libmachine: (ha-347193) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 17:58:20.394637  256536 main.go:141] libmachine: (ha-347193) DBG | Writing SSH key tar header
	I0920 17:58:20.394645  256536 main.go:141] libmachine: (ha-347193) Creating domain...
	I0920 17:58:20.394695  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:20.394434  256559 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193 ...
	I0920 17:58:20.394726  256536 main.go:141] libmachine: (ha-347193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193
	I0920 17:58:20.394740  256536 main.go:141] libmachine: (ha-347193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines
	I0920 17:58:20.394750  256536 main.go:141] libmachine: (ha-347193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:58:20.394760  256536 main.go:141] libmachine: (ha-347193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658
	I0920 17:58:20.394766  256536 main.go:141] libmachine: (ha-347193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 17:58:20.394776  256536 main.go:141] libmachine: (ha-347193) DBG | Checking permissions on dir: /home/jenkins
	I0920 17:58:20.394781  256536 main.go:141] libmachine: (ha-347193) DBG | Checking permissions on dir: /home
	I0920 17:58:20.394791  256536 main.go:141] libmachine: (ha-347193) DBG | Skipping /home - not owner
	I0920 17:58:20.396055  256536 main.go:141] libmachine: (ha-347193) define libvirt domain using xml: 
	I0920 17:58:20.396079  256536 main.go:141] libmachine: (ha-347193) <domain type='kvm'>
	I0920 17:58:20.396085  256536 main.go:141] libmachine: (ha-347193)   <name>ha-347193</name>
	I0920 17:58:20.396090  256536 main.go:141] libmachine: (ha-347193)   <memory unit='MiB'>2200</memory>
	I0920 17:58:20.396095  256536 main.go:141] libmachine: (ha-347193)   <vcpu>2</vcpu>
	I0920 17:58:20.396099  256536 main.go:141] libmachine: (ha-347193)   <features>
	I0920 17:58:20.396104  256536 main.go:141] libmachine: (ha-347193)     <acpi/>
	I0920 17:58:20.396108  256536 main.go:141] libmachine: (ha-347193)     <apic/>
	I0920 17:58:20.396113  256536 main.go:141] libmachine: (ha-347193)     <pae/>
	I0920 17:58:20.396121  256536 main.go:141] libmachine: (ha-347193)     
	I0920 17:58:20.396125  256536 main.go:141] libmachine: (ha-347193)   </features>
	I0920 17:58:20.396130  256536 main.go:141] libmachine: (ha-347193)   <cpu mode='host-passthrough'>
	I0920 17:58:20.396135  256536 main.go:141] libmachine: (ha-347193)   
	I0920 17:58:20.396139  256536 main.go:141] libmachine: (ha-347193)   </cpu>
	I0920 17:58:20.396144  256536 main.go:141] libmachine: (ha-347193)   <os>
	I0920 17:58:20.396150  256536 main.go:141] libmachine: (ha-347193)     <type>hvm</type>
	I0920 17:58:20.396155  256536 main.go:141] libmachine: (ha-347193)     <boot dev='cdrom'/>
	I0920 17:58:20.396161  256536 main.go:141] libmachine: (ha-347193)     <boot dev='hd'/>
	I0920 17:58:20.396220  256536 main.go:141] libmachine: (ha-347193)     <bootmenu enable='no'/>
	I0920 17:58:20.396253  256536 main.go:141] libmachine: (ha-347193)   </os>
	I0920 17:58:20.396265  256536 main.go:141] libmachine: (ha-347193)   <devices>
	I0920 17:58:20.396277  256536 main.go:141] libmachine: (ha-347193)     <disk type='file' device='cdrom'>
	I0920 17:58:20.396294  256536 main.go:141] libmachine: (ha-347193)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/boot2docker.iso'/>
	I0920 17:58:20.396309  256536 main.go:141] libmachine: (ha-347193)       <target dev='hdc' bus='scsi'/>
	I0920 17:58:20.396321  256536 main.go:141] libmachine: (ha-347193)       <readonly/>
	I0920 17:58:20.396335  256536 main.go:141] libmachine: (ha-347193)     </disk>
	I0920 17:58:20.396350  256536 main.go:141] libmachine: (ha-347193)     <disk type='file' device='disk'>
	I0920 17:58:20.396362  256536 main.go:141] libmachine: (ha-347193)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 17:58:20.396376  256536 main.go:141] libmachine: (ha-347193)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/ha-347193.rawdisk'/>
	I0920 17:58:20.396387  256536 main.go:141] libmachine: (ha-347193)       <target dev='hda' bus='virtio'/>
	I0920 17:58:20.396398  256536 main.go:141] libmachine: (ha-347193)     </disk>
	I0920 17:58:20.396413  256536 main.go:141] libmachine: (ha-347193)     <interface type='network'>
	I0920 17:58:20.396427  256536 main.go:141] libmachine: (ha-347193)       <source network='mk-ha-347193'/>
	I0920 17:58:20.396437  256536 main.go:141] libmachine: (ha-347193)       <model type='virtio'/>
	I0920 17:58:20.396449  256536 main.go:141] libmachine: (ha-347193)     </interface>
	I0920 17:58:20.396460  256536 main.go:141] libmachine: (ha-347193)     <interface type='network'>
	I0920 17:58:20.396470  256536 main.go:141] libmachine: (ha-347193)       <source network='default'/>
	I0920 17:58:20.396484  256536 main.go:141] libmachine: (ha-347193)       <model type='virtio'/>
	I0920 17:58:20.396495  256536 main.go:141] libmachine: (ha-347193)     </interface>
	I0920 17:58:20.396502  256536 main.go:141] libmachine: (ha-347193)     <serial type='pty'>
	I0920 17:58:20.396514  256536 main.go:141] libmachine: (ha-347193)       <target port='0'/>
	I0920 17:58:20.396524  256536 main.go:141] libmachine: (ha-347193)     </serial>
	I0920 17:58:20.396535  256536 main.go:141] libmachine: (ha-347193)     <console type='pty'>
	I0920 17:58:20.396546  256536 main.go:141] libmachine: (ha-347193)       <target type='serial' port='0'/>
	I0920 17:58:20.396570  256536 main.go:141] libmachine: (ha-347193)     </console>
	I0920 17:58:20.396588  256536 main.go:141] libmachine: (ha-347193)     <rng model='virtio'>
	I0920 17:58:20.396595  256536 main.go:141] libmachine: (ha-347193)       <backend model='random'>/dev/random</backend>
	I0920 17:58:20.396604  256536 main.go:141] libmachine: (ha-347193)     </rng>
	I0920 17:58:20.396635  256536 main.go:141] libmachine: (ha-347193)     
	I0920 17:58:20.396657  256536 main.go:141] libmachine: (ha-347193)     
	I0920 17:58:20.396672  256536 main.go:141] libmachine: (ha-347193)   </devices>
	I0920 17:58:20.396680  256536 main.go:141] libmachine: (ha-347193) </domain>
	I0920 17:58:20.396699  256536 main.go:141] libmachine: (ha-347193) 
	I0920 17:58:20.401190  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:83:b4:8d in network default
	I0920 17:58:20.401745  256536 main.go:141] libmachine: (ha-347193) Ensuring networks are active...
	I0920 17:58:20.401764  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:20.402424  256536 main.go:141] libmachine: (ha-347193) Ensuring network default is active
	I0920 17:58:20.402677  256536 main.go:141] libmachine: (ha-347193) Ensuring network mk-ha-347193 is active
	I0920 17:58:20.403127  256536 main.go:141] libmachine: (ha-347193) Getting domain xml...
	I0920 17:58:20.403705  256536 main.go:141] libmachine: (ha-347193) Creating domain...
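With the domain XML assembled, the driver defines and boots the VM through libvirt. A rough manual equivalent, assuming the XML above has been saved to ha-347193.xml and that virsh is installed (the kvm2 driver itself uses the libvirt API rather than shelling out like this):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Define the domain from the saved XML, then start it: equivalent in
	// spirit to the "define libvirt domain" and "Creating domain..." steps
	// in the log above.
	for _, args := range [][]string{
		{"virsh", "define", "ha-347193.xml"},
		{"virsh", "start", "ha-347193"},
	} {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err != nil {
			log.Fatalf("%v failed: %v\n%s", args, err, out)
		}
	}
}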
	I0920 17:58:21.630872  256536 main.go:141] libmachine: (ha-347193) Waiting to get IP...
	I0920 17:58:21.631658  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:21.632047  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:21.632073  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:21.632024  256559 retry.go:31] will retry after 215.475523ms: waiting for machine to come up
	I0920 17:58:21.849753  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:21.850279  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:21.850310  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:21.850240  256559 retry.go:31] will retry after 263.201454ms: waiting for machine to come up
	I0920 17:58:22.114802  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:22.115310  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:22.115338  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:22.115259  256559 retry.go:31] will retry after 445.148422ms: waiting for machine to come up
	I0920 17:58:22.562073  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:22.562548  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:22.562573  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:22.562510  256559 retry.go:31] will retry after 558.224345ms: waiting for machine to come up
	I0920 17:58:23.122632  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:23.123096  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:23.123123  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:23.123050  256559 retry.go:31] will retry after 528.914105ms: waiting for machine to come up
	I0920 17:58:23.654056  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:23.654437  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:23.654467  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:23.654380  256559 retry.go:31] will retry after 657.509004ms: waiting for machine to come up
	I0920 17:58:24.313318  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:24.313802  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:24.313857  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:24.313765  256559 retry.go:31] will retry after 757.318604ms: waiting for machine to come up
	I0920 17:58:25.072515  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:25.072965  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:25.072995  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:25.072907  256559 retry.go:31] will retry after 1.361384929s: waiting for machine to come up
	I0920 17:58:26.435555  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:26.436017  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:26.436061  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:26.435982  256559 retry.go:31] will retry after 1.541186599s: waiting for machine to come up
	I0920 17:58:27.979940  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:27.980429  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:27.980460  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:27.980357  256559 retry.go:31] will retry after 1.786301166s: waiting for machine to come up
	I0920 17:58:29.767912  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:29.768468  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:29.768491  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:29.768439  256559 retry.go:31] will retry after 1.809883951s: waiting for machine to come up
	I0920 17:58:31.581113  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:31.581588  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:31.581619  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:31.581535  256559 retry.go:31] will retry after 3.405747274s: waiting for machine to come up
	I0920 17:58:34.988932  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:34.989387  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:34.989410  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:34.989369  256559 retry.go:31] will retry after 3.845362816s: waiting for machine to come up
	I0920 17:58:38.839191  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:38.839734  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:38.839759  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:38.839690  256559 retry.go:31] will retry after 3.611631644s: waiting for machine to come up
	I0920 17:58:42.454482  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.454977  256536 main.go:141] libmachine: (ha-347193) Found IP for machine: 192.168.39.246
	I0920 17:58:42.455003  256536 main.go:141] libmachine: (ha-347193) Reserving static IP address...
	I0920 17:58:42.455016  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has current primary IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.455495  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find host DHCP lease matching {name: "ha-347193", mac: "52:54:00:2e:07:bb", ip: "192.168.39.246"} in network mk-ha-347193
	I0920 17:58:42.533022  256536 main.go:141] libmachine: (ha-347193) DBG | Getting to WaitForSSH function...
	I0920 17:58:42.533056  256536 main.go:141] libmachine: (ha-347193) Reserved static IP address: 192.168.39.246
	I0920 17:58:42.533070  256536 main.go:141] libmachine: (ha-347193) Waiting for SSH to be available...
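The retry lines above show the driver polling for the new domain's DHCP lease with growing delays until an address appears. A minimal Go sketch of that wait-with-backoff pattern (lookupIP is a hypothetical stand-in for the lease query; the delays are illustrative, not minikube's actual schedule):

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP stands in for asking libvirt's DHCP lease table for the
// address bound to the domain's MAC; it fails until the guest is up.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		time.Sleep(delay)
		if delay < 4*time.Second {
			delay *= 2 // back off, as the retry.go lines above do
		}
	}
	return "", fmt.Errorf("no IP for %s within %s", mac, timeout)
}

func main() {
	ip, err := waitForIP("52:54:00:2e:07:bb", 30*time.Second)
	fmt.Println(ip, err)
}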
	I0920 17:58:42.535894  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.536329  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:42.536361  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.536501  256536 main.go:141] libmachine: (ha-347193) DBG | Using SSH client type: external
	I0920 17:58:42.536525  256536 main.go:141] libmachine: (ha-347193) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa (-rw-------)
	I0920 17:58:42.536553  256536 main.go:141] libmachine: (ha-347193) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 17:58:42.536592  256536 main.go:141] libmachine: (ha-347193) DBG | About to run SSH command:
	I0920 17:58:42.536627  256536 main.go:141] libmachine: (ha-347193) DBG | exit 0
	I0920 17:58:42.662095  256536 main.go:141] libmachine: (ha-347193) DBG | SSH cmd err, output: <nil>: 
	I0920 17:58:42.662356  256536 main.go:141] libmachine: (ha-347193) KVM machine creation complete!
	I0920 17:58:42.662742  256536 main.go:141] libmachine: (ha-347193) Calling .GetConfigRaw
	I0920 17:58:42.663393  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:42.663609  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:42.663783  256536 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 17:58:42.663799  256536 main.go:141] libmachine: (ha-347193) Calling .GetState
	I0920 17:58:42.665335  256536 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 17:58:42.665349  256536 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 17:58:42.665355  256536 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 17:58:42.665361  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:42.667970  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.668505  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:42.668538  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.668703  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:42.668963  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:42.669124  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:42.669264  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:42.669457  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:42.669727  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 17:58:42.669743  256536 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 17:58:42.777219  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
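The exit 0 probe above is how provisioning decides the VM's SSH endpoint is usable. A minimal sketch of the same check with golang.org/x/crypto/ssh, reusing the address, user, and key path from the log; host-key verification is skipped, matching the StrictHostKeyChecking=no and UserKnownHostsFile=/dev/null options logged earlier:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshReady dials the guest and runs "exit 0", returning nil once the
// SSH server accepts the key-based login and executes the command.
func sshReady(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	err := sshReady("192.168.39.246:22", "docker",
		"/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa")
	fmt.Println("ssh ready:", err == nil, err)
}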
	I0920 17:58:42.777243  256536 main.go:141] libmachine: Detecting the provisioner...
	I0920 17:58:42.777251  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:42.779860  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.780225  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:42.780252  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.780402  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:42.780602  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:42.780743  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:42.780837  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:42.781037  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:42.781263  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 17:58:42.781279  256536 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 17:58:42.886633  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 17:58:42.886732  256536 main.go:141] libmachine: found compatible host: buildroot
	I0920 17:58:42.886747  256536 main.go:141] libmachine: Provisioning with buildroot...
	I0920 17:58:42.886757  256536 main.go:141] libmachine: (ha-347193) Calling .GetMachineName
	I0920 17:58:42.887046  256536 buildroot.go:166] provisioning hostname "ha-347193"
	I0920 17:58:42.887073  256536 main.go:141] libmachine: (ha-347193) Calling .GetMachineName
	I0920 17:58:42.887313  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:42.889831  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.890182  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:42.890207  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.890355  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:42.890545  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:42.890718  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:42.890846  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:42.891093  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:42.891253  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 17:58:42.891265  256536 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-347193 && echo "ha-347193" | sudo tee /etc/hostname
	I0920 17:58:43.011225  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-347193
	
	I0920 17:58:43.011253  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.014324  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.014803  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.014831  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.015003  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:43.015234  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.015466  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.015676  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:43.015888  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:43.016055  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 17:58:43.016070  256536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-347193' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-347193/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-347193' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:58:43.130242  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:58:43.130286  256536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 17:58:43.130357  256536 buildroot.go:174] setting up certificates
	I0920 17:58:43.130379  256536 provision.go:84] configureAuth start
	I0920 17:58:43.130401  256536 main.go:141] libmachine: (ha-347193) Calling .GetMachineName
	I0920 17:58:43.130726  256536 main.go:141] libmachine: (ha-347193) Calling .GetIP
	I0920 17:58:43.133505  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.133825  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.133848  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.134052  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.136401  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.136730  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.136750  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.136952  256536 provision.go:143] copyHostCerts
	I0920 17:58:43.136981  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 17:58:43.137013  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 17:58:43.137030  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 17:58:43.137096  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 17:58:43.137174  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 17:58:43.137193  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 17:58:43.137199  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 17:58:43.137223  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 17:58:43.137264  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 17:58:43.137284  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 17:58:43.137292  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 17:58:43.137312  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 17:58:43.137361  256536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.ha-347193 san=[127.0.0.1 192.168.39.246 ha-347193 localhost minikube]
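The server certificate above carries SANs for the loopback address, the VM's IP, and its hostnames. A compact crypto/x509 sketch that issues a certificate with those SANs; it self-signs for brevity, whereas the log shows the certificate being signed with the minikube CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-347193"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as listed in the provision.go line above.
		DNSNames:    []string{"ha-347193", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.246")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}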
	I0920 17:58:43.262974  256536 provision.go:177] copyRemoteCerts
	I0920 17:58:43.263055  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:58:43.263085  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.265602  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.265934  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.265962  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.266136  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:43.266349  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.266507  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:43.266640  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:58:43.348226  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 17:58:43.348355  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 17:58:43.371291  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 17:58:43.371380  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0920 17:58:43.393409  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 17:58:43.393490  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 17:58:43.417165  256536 provision.go:87] duration metric: took 286.759784ms to configureAuth
	I0920 17:58:43.417200  256536 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:58:43.417422  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:58:43.417508  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.420548  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.420826  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.420856  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.421056  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:43.421256  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.421438  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.421576  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:43.421745  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:43.422081  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 17:58:43.422105  256536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:58:43.638028  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:58:43.638062  256536 main.go:141] libmachine: Checking connection to Docker...
	I0920 17:58:43.638075  256536 main.go:141] libmachine: (ha-347193) Calling .GetURL
	I0920 17:58:43.639465  256536 main.go:141] libmachine: (ha-347193) DBG | Using libvirt version 6000000
	I0920 17:58:43.641835  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.642260  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.642284  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.642472  256536 main.go:141] libmachine: Docker is up and running!
	I0920 17:58:43.642489  256536 main.go:141] libmachine: Reticulating splines...
	I0920 17:58:43.642498  256536 client.go:171] duration metric: took 23.821123659s to LocalClient.Create
	I0920 17:58:43.642520  256536 start.go:167] duration metric: took 23.821189376s to libmachine.API.Create "ha-347193"
	I0920 17:58:43.642527  256536 start.go:293] postStartSetup for "ha-347193" (driver="kvm2")
	I0920 17:58:43.642537  256536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:58:43.642552  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:43.642767  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:58:43.642797  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.645726  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.646207  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.646228  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.646384  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:43.646562  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.646731  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:43.646875  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:58:43.732855  256536 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:58:43.737146  256536 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:58:43.737179  256536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 17:58:43.737266  256536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 17:58:43.737348  256536 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 17:58:43.737360  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /etc/ssl/certs/2448492.pem
	I0920 17:58:43.737457  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:58:43.746873  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 17:58:43.769680  256536 start.go:296] duration metric: took 127.135312ms for postStartSetup
	I0920 17:58:43.769753  256536 main.go:141] libmachine: (ha-347193) Calling .GetConfigRaw
	I0920 17:58:43.770539  256536 main.go:141] libmachine: (ha-347193) Calling .GetIP
	I0920 17:58:43.773368  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.773790  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.773812  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.774131  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 17:58:43.774327  256536 start.go:128] duration metric: took 23.973205594s to createHost
	I0920 17:58:43.774352  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.776811  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.777154  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.777173  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.777359  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:43.777566  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.777714  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.777851  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:43.778046  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:43.778254  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 17:58:43.778275  256536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:58:43.886468  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855123.865489975
	
	I0920 17:58:43.886492  256536 fix.go:216] guest clock: 1726855123.865489975
	I0920 17:58:43.886500  256536 fix.go:229] Guest: 2024-09-20 17:58:43.865489975 +0000 UTC Remote: 2024-09-20 17:58:43.77433865 +0000 UTC m=+24.090830996 (delta=91.151325ms)
	I0920 17:58:43.886521  256536 fix.go:200] guest clock delta is within tolerance: 91.151325ms
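The clock check above reads the guest's date +%s.%N output, compares it with the host timestamp, and accepts the roughly 91ms of drift. A small sketch of that computation; the 2-second tolerance here is an assumption, not minikube's constant:

package main

import (
	"fmt"
	"math"
	"time"
)

// withinTolerance converts the guest's fractional epoch seconds into a
// time.Time, takes the absolute difference from the host clock, and
// reports whether the drift is acceptable.
func withinTolerance(guestEpoch float64, host time.Time, tol time.Duration) (time.Duration, bool) {
	sec, frac := math.Modf(guestEpoch)
	guest := time.Unix(int64(sec), int64(frac*1e9))
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	host := time.Date(2024, 9, 20, 17, 58, 43, 774338650, time.UTC) // "Remote" time from the log
	delta, ok := withinTolerance(1726855123.865489975, host, 2*time.Second)
	fmt.Println(delta, ok) // roughly 91ms, true
}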
	I0920 17:58:43.886526  256536 start.go:83] releasing machines lock for "ha-347193", held for 24.085494311s
	I0920 17:58:43.886548  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:43.886838  256536 main.go:141] libmachine: (ha-347193) Calling .GetIP
	I0920 17:58:43.889513  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.889872  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.889896  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.890072  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:43.890584  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:43.890771  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:43.890844  256536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:58:43.890926  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.890977  256536 ssh_runner.go:195] Run: cat /version.json
	I0920 17:58:43.891005  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.893664  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.894009  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.894036  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.894186  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.894206  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:43.894370  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.894560  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.894569  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:43.894586  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.894713  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:58:43.894782  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:43.894935  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.895088  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:43.895207  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:58:43.976109  256536 ssh_runner.go:195] Run: systemctl --version
	I0920 17:58:44.018728  256536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:58:44.175337  256536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:58:44.181194  256536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:58:44.181279  256536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:58:44.199685  256536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 17:58:44.199719  256536 start.go:495] detecting cgroup driver to use...
	I0920 17:58:44.199799  256536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:58:44.215955  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:58:44.230482  256536 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:58:44.230549  256536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:58:44.244728  256536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:58:44.258137  256536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:58:44.370456  256536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:58:44.514103  256536 docker.go:233] disabling docker service ...
	I0920 17:58:44.514175  256536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:58:44.536863  256536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:58:44.550231  256536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:58:44.683486  256536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:58:44.793154  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:58:44.806166  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:58:44.823607  256536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:58:44.823754  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:44.833725  256536 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:58:44.833789  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:44.843703  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:44.853327  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:44.862729  256536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:58:44.872472  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:44.882312  256536 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:44.898952  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
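Taken together, the sed edits above shape CRI-O's drop-in at /etc/crio/crio.conf.d/02-crio.conf: the pause image is pinned to registry.k8s.io/pause:3.10, the cgroup manager is set to cgroupfs with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A hedged way to confirm the result (assuming those keys existed in the drop-in before the edits):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected lines:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",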
	I0920 17:58:44.908482  256536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:58:44.917186  256536 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 17:58:44.917249  256536 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 17:58:44.928614  256536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
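The status-255 sysctl failure above is the expected first-boot case: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, so the runner falls back to modprobe and then enables IPv4 forwarding. Condensed, the fallback is:

    if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
        sudo modprobe br_netfilter          # creates /proc/sys/net/bridge/*
    fi
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'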
	I0920 17:58:44.938764  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:58:45.045827  256536 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 17:58:45.135797  256536 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:58:45.135868  256536 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:58:45.140339  256536 start.go:563] Will wait 60s for crictl version
	I0920 17:58:45.140407  256536 ssh_runner.go:195] Run: which crictl
	I0920 17:58:45.144096  256536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:58:45.187435  256536 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:58:45.187543  256536 ssh_runner.go:195] Run: crio --version
	I0920 17:58:45.213699  256536 ssh_runner.go:195] Run: crio --version
	I0920 17:58:45.242965  256536 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:58:45.244260  256536 main.go:141] libmachine: (ha-347193) Calling .GetIP
	I0920 17:58:45.247006  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:45.247310  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:45.247334  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:45.247515  256536 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:58:45.251447  256536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:58:45.263292  256536 kubeadm.go:883] updating cluster {Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 17:58:45.263401  256536 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:58:45.263455  256536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:58:45.293889  256536 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 17:58:45.293981  256536 ssh_runner.go:195] Run: which lz4
	I0920 17:58:45.297564  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0920 17:58:45.297677  256536 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 17:58:45.301429  256536 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 17:58:45.301465  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 17:58:46.526820  256536 crio.go:462] duration metric: took 1.229164304s to copy over tarball
	I0920 17:58:46.526906  256536 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 17:58:48.552055  256536 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.025114598s)
	I0920 17:58:48.552091  256536 crio.go:469] duration metric: took 2.025229025s to extract the tarball
	I0920 17:58:48.552101  256536 ssh_runner.go:146] rm: /preloaded.tar.lz4
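The preload handling above is a simple check-then-copy: stat on /preloaded.tar.lz4 fails, so the ~388 MB cached tarball is copied over SSH, unpacked into /var (where CRI-O keeps its image store), and removed. Roughly, on the node side:

    # check-then-copy: only transfer the tarball when it is not already on the node
    stat /preloaded.tar.lz4 >/dev/null 2>&1 \
        || echo "copy preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 over SSH here"
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4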
	I0920 17:58:48.595514  256536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:58:48.637483  256536 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 17:58:48.637509  256536 cache_images.go:84] Images are preloaded, skipping loading
	I0920 17:58:48.637517  256536 kubeadm.go:934] updating node { 192.168.39.246 8443 v1.31.1 crio true true} ...
	I0920 17:58:48.637615  256536 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-347193 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
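The fragment above is the ExecStart override that later lands on the node as the 309-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in; the empty ExecStart= line clears the unit's default command before the minikube-specific one is set. A quick way to inspect the merged unit on the node (standard systemd commands, not something the runner itself executes):

    systemctl cat kubelet                  # shows kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart    # confirms --node-ip/--hostname-override took effect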
	I0920 17:58:48.637681  256536 ssh_runner.go:195] Run: crio config
	I0920 17:58:48.685785  256536 cni.go:84] Creating CNI manager for ""
	I0920 17:58:48.685807  256536 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 17:58:48.685817  256536 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 17:58:48.685841  256536 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.246 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-347193 NodeName:ha-347193 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 17:58:48.686000  256536 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-347193"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 17:58:48.686029  256536 kube-vip.go:115] generating kube-vip config ...
	I0920 17:58:48.686069  256536 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 17:58:48.702147  256536 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 17:58:48.702255  256536 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
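The static pod above is what serves the APIServerHAVIP 192.168.39.254 from the cluster config: kube-vip runs with NET_ADMIN/NET_RAW on the host network, takes the plndr-cp-lock lease to elect a leader, and announces the VIP on eth0 with control-plane load-balancing on port 8443. A rough check once the node is up (assuming this node currently holds the lease):

    ip addr show eth0 | grep 192.168.39.254       # VIP is present on the elected leader only
    curl -k https://192.168.39.254:8443/healthz   # API server reachable through the VIP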
	I0920 17:58:48.702306  256536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:58:48.711975  256536 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 17:58:48.712116  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 17:58:48.721456  256536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0920 17:58:48.737853  256536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:58:48.754664  256536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 17:58:48.771220  256536 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0920 17:58:48.786667  256536 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 17:58:48.790274  256536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:58:48.802824  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:58:48.920298  256536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:58:48.937204  256536 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193 for IP: 192.168.39.246
	I0920 17:58:48.937241  256536 certs.go:194] generating shared ca certs ...
	I0920 17:58:48.937263  256536 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:48.937423  256536 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 17:58:48.937475  256536 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 17:58:48.937490  256536 certs.go:256] generating profile certs ...
	I0920 17:58:48.937561  256536 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key
	I0920 17:58:48.937579  256536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.crt with IP's: []
	I0920 17:58:49.084514  256536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.crt ...
	I0920 17:58:49.084549  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.crt: {Name:mk13d47d95d81e73445ca468d2d07a6230b36ca1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:49.084751  256536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key ...
	I0920 17:58:49.084769  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key: {Name:mk2e8c8a89fbce74c4a6cf70a50b1649d0b0d470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:49.084875  256536 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.3b9d2b82
	I0920 17:58:49.084895  256536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.3b9d2b82 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.254]
	I0920 17:58:49.268687  256536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.3b9d2b82 ...
	I0920 17:58:49.268724  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.3b9d2b82: {Name:mkc4d8dcb610e2c55a07bec95a2587e189c4dfcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:49.268922  256536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.3b9d2b82 ...
	I0920 17:58:49.268941  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.3b9d2b82: {Name:mk97e4ea20b46f77acfe6f051b666b6376a68732 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:49.269045  256536 certs.go:381] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.3b9d2b82 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt
	I0920 17:58:49.269140  256536 certs.go:385] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.3b9d2b82 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key
	I0920 17:58:49.269224  256536 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key
	I0920 17:58:49.269247  256536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt with IP's: []
	I0920 17:58:49.848819  256536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt ...
	I0920 17:58:49.848866  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt: {Name:mk6162fd8372a3b1149ed5cf0cc51090f3274530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:49.849075  256536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key ...
	I0920 17:58:49.849088  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key: {Name:mk1d07a6aa2e0b7041a110499c13eb6b4fb89fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:49.849167  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 17:58:49.849186  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 17:58:49.849200  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 17:58:49.849215  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 17:58:49.849230  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 17:58:49.849245  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 17:58:49.849263  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 17:58:49.849275  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 17:58:49.849331  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 17:58:49.849370  256536 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 17:58:49.849382  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:58:49.849407  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 17:58:49.849435  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:58:49.849460  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 17:58:49.849503  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 17:58:49.849533  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:58:49.849550  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem -> /usr/share/ca-certificates/244849.pem
	I0920 17:58:49.849572  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /usr/share/ca-certificates/2448492.pem
	I0920 17:58:49.850129  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:58:49.878422  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:58:49.902242  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:58:49.926391  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 17:58:49.950027  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 17:58:49.972641  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 17:58:49.997022  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:58:50.021804  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:58:50.045879  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:58:50.069136  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 17:58:50.092444  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 17:58:50.116716  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 17:58:50.136353  256536 ssh_runner.go:195] Run: openssl version
	I0920 17:58:50.145863  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:58:50.157513  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:58:50.162700  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:58:50.162778  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:58:50.168948  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:58:50.180125  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 17:58:50.192366  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 17:58:50.197085  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 17:58:50.197163  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 17:58:50.203424  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 17:58:50.216229  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 17:58:50.228077  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 17:58:50.233241  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 17:58:50.233312  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 17:58:50.240012  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
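The .0 symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's hashed-directory lookup: each link name is the certificate's subject hash, which is how TLS clients locate a CA in /etc/ssl/certs. For example:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to minikubeCA.pem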
	I0920 17:58:50.251599  256536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:58:50.256160  256536 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:58:50.256224  256536 kubeadm.go:392] StartCluster: {Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:58:50.256322  256536 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 17:58:50.256375  256536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 17:58:50.298938  256536 cri.go:89] found id: ""
	I0920 17:58:50.299007  256536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 17:58:50.309387  256536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 17:58:50.319684  256536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 17:58:50.330318  256536 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 17:58:50.330339  256536 kubeadm.go:157] found existing configuration files:
	
	I0920 17:58:50.330388  256536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 17:58:50.339356  256536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 17:58:50.339424  256536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 17:58:50.348952  256536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 17:58:50.357964  256536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 17:58:50.358028  256536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 17:58:50.367163  256536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 17:58:50.376370  256536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 17:58:50.376452  256536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 17:58:50.385926  256536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 17:58:50.395143  256536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 17:58:50.395230  256536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
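The four grep/rm pairs above are the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is removed so kubeadm init can regenerate it. Condensed, the same logic is:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
            || sudo rm -f "/etc/kubernetes/$f"
    done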
	I0920 17:58:50.405341  256536 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 17:58:50.519254  256536 kubeadm.go:310] W0920 17:58:50.504659     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:58:50.520220  256536 kubeadm.go:310] W0920 17:58:50.505817     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:58:50.645093  256536 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 17:59:01.982945  256536 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 17:59:01.983025  256536 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 17:59:01.983103  256536 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 17:59:01.983216  256536 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 17:59:01.983302  256536 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 17:59:01.983352  256536 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 17:59:01.985269  256536 out.go:235]   - Generating certificates and keys ...
	I0920 17:59:01.985356  256536 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 17:59:01.985409  256536 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 17:59:01.985500  256536 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 17:59:01.985582  256536 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 17:59:01.985647  256536 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 17:59:01.985692  256536 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 17:59:01.985749  256536 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 17:59:01.985852  256536 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-347193 localhost] and IPs [192.168.39.246 127.0.0.1 ::1]
	I0920 17:59:01.985922  256536 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 17:59:01.986042  256536 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-347193 localhost] and IPs [192.168.39.246 127.0.0.1 ::1]
	I0920 17:59:01.986131  256536 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 17:59:01.986209  256536 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 17:59:01.986270  256536 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 17:59:01.986323  256536 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 17:59:01.986367  256536 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 17:59:01.986420  256536 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 17:59:01.986465  256536 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 17:59:01.986546  256536 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 17:59:01.986640  256536 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 17:59:01.986748  256536 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 17:59:01.986815  256536 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 17:59:01.988640  256536 out.go:235]   - Booting up control plane ...
	I0920 17:59:01.988728  256536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 17:59:01.988790  256536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 17:59:01.988846  256536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 17:59:01.988962  256536 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 17:59:01.989082  256536 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 17:59:01.989168  256536 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 17:59:01.989296  256536 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 17:59:01.989387  256536 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 17:59:01.989445  256536 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001806633s
	I0920 17:59:01.989505  256536 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 17:59:01.989576  256536 kubeadm.go:310] [api-check] The API server is healthy after 5.617049153s
	I0920 17:59:01.989696  256536 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 17:59:01.989803  256536 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 17:59:01.989858  256536 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 17:59:01.990057  256536 kubeadm.go:310] [mark-control-plane] Marking the node ha-347193 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 17:59:01.990116  256536 kubeadm.go:310] [bootstrap-token] Using token: copxt9.xhya9dvcru2ncb8u
	I0920 17:59:01.991737  256536 out.go:235]   - Configuring RBAC rules ...
	I0920 17:59:01.991825  256536 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 17:59:01.991930  256536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 17:59:01.992134  256536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 17:59:01.992315  256536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 17:59:01.992430  256536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 17:59:01.992514  256536 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 17:59:01.992624  256536 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 17:59:01.992678  256536 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 17:59:01.992734  256536 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 17:59:01.992741  256536 kubeadm.go:310] 
	I0920 17:59:01.992825  256536 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 17:59:01.992833  256536 kubeadm.go:310] 
	I0920 17:59:01.992910  256536 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 17:59:01.992916  256536 kubeadm.go:310] 
	I0920 17:59:01.992954  256536 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 17:59:01.993039  256536 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 17:59:01.993097  256536 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 17:59:01.993103  256536 kubeadm.go:310] 
	I0920 17:59:01.993147  256536 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 17:59:01.993155  256536 kubeadm.go:310] 
	I0920 17:59:01.993208  256536 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 17:59:01.993218  256536 kubeadm.go:310] 
	I0920 17:59:01.993275  256536 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 17:59:01.993343  256536 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 17:59:01.993400  256536 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 17:59:01.993414  256536 kubeadm.go:310] 
	I0920 17:59:01.993487  256536 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 17:59:01.993558  256536 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 17:59:01.993564  256536 kubeadm.go:310] 
	I0920 17:59:01.993661  256536 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token copxt9.xhya9dvcru2ncb8u \
	I0920 17:59:01.993755  256536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 \
	I0920 17:59:01.993786  256536 kubeadm.go:310] 	--control-plane 
	I0920 17:59:01.993795  256536 kubeadm.go:310] 
	I0920 17:59:01.993885  256536 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 17:59:01.993896  256536 kubeadm.go:310] 
	I0920 17:59:01.994008  256536 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token copxt9.xhya9dvcru2ncb8u \
	I0920 17:59:01.994126  256536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 
	I0920 17:59:01.994145  256536 cni.go:84] Creating CNI manager for ""
	I0920 17:59:01.994153  256536 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 17:59:01.995934  256536 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 17:59:01.997387  256536 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 17:59:02.002770  256536 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 17:59:02.002796  256536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 17:59:02.023932  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 17:59:02.397367  256536 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 17:59:02.397459  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:59:02.397493  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-347193 minikube.k8s.io/updated_at=2024_09_20T17_59_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=ha-347193 minikube.k8s.io/primary=true
	I0920 17:59:02.423770  256536 ops.go:34] apiserver oom_adj: -16
	I0920 17:59:02.508023  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:59:03.008485  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:59:03.508182  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:59:04.008435  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:59:04.508089  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:59:05.009064  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:59:05.101282  256536 kubeadm.go:1113] duration metric: took 2.703897001s to wait for elevateKubeSystemPrivileges
	I0920 17:59:05.101325  256536 kubeadm.go:394] duration metric: took 14.845108845s to StartCluster
	I0920 17:59:05.101350  256536 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:59:05.101447  256536 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:59:05.102205  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:59:05.102460  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 17:59:05.102470  256536 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 17:59:05.102452  256536 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:59:05.102580  256536 addons.go:69] Setting default-storageclass=true in profile "ha-347193"
	I0920 17:59:05.102587  256536 start.go:241] waiting for startup goroutines ...
	I0920 17:59:05.102561  256536 addons.go:69] Setting storage-provisioner=true in profile "ha-347193"
	I0920 17:59:05.102601  256536 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-347193"
	I0920 17:59:05.102614  256536 addons.go:234] Setting addon storage-provisioner=true in "ha-347193"
	I0920 17:59:05.102655  256536 host.go:66] Checking if "ha-347193" exists ...
	I0920 17:59:05.102708  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:59:05.103073  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:05.103096  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:05.103105  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:05.103128  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:05.119041  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I0920 17:59:05.119120  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40515
	I0920 17:59:05.119527  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:05.119535  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:05.120054  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:05.120064  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:05.120077  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:05.120081  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:05.120411  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:05.120459  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:05.120594  256536 main.go:141] libmachine: (ha-347193) Calling .GetState
	I0920 17:59:05.120915  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:05.120945  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:05.123163  256536 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:59:05.123416  256536 kapi.go:59] client config for ha-347193: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.crt", KeyFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key", CAFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 17:59:05.123863  256536 cert_rotation.go:140] Starting client certificate rotation controller
	I0920 17:59:05.124188  256536 addons.go:234] Setting addon default-storageclass=true in "ha-347193"
	I0920 17:59:05.124232  256536 host.go:66] Checking if "ha-347193" exists ...
	I0920 17:59:05.124598  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:05.124630  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:05.136314  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38555
	I0920 17:59:05.136762  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:05.137268  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:05.137297  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:05.137618  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:05.137833  256536 main.go:141] libmachine: (ha-347193) Calling .GetState
	I0920 17:59:05.139657  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:59:05.139802  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38355
	I0920 17:59:05.140195  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:05.140708  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:05.140736  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:05.141146  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:05.141698  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:05.141724  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:05.141892  256536 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 17:59:05.143631  256536 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:59:05.143657  256536 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 17:59:05.143686  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:59:05.146965  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:05.147514  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:59:05.147538  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:05.147705  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:59:05.147909  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:59:05.148047  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:59:05.148204  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:59:05.158393  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39527
	I0920 17:59:05.158953  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:05.159494  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:05.159527  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:05.159919  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:05.160100  256536 main.go:141] libmachine: (ha-347193) Calling .GetState
	I0920 17:59:05.161631  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:59:05.161924  256536 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 17:59:05.161945  256536 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 17:59:05.161964  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:59:05.164799  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:05.165159  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:59:05.165192  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:05.165404  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:59:05.165619  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:59:05.165790  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:59:05.165962  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:59:05.229095  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 17:59:05.299511  256536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:59:05.333515  256536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 17:59:05.572818  256536 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
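The ConfigMap rewrite logged above is a plain text edit: a hosts block pointing host.minikube.internal at the gateway IP is spliced in just before the Corefile's `forward . /etc/resolv.conf` directive. A minimal Go sketch of the same splice, assuming a hypothetical `injectHostRecord` helper and a shortened Corefile (minikube itself performs this edit with the sed pipeline shown in the log):

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS hosts block immediately before the
// "forward ." directive, mirroring the effect of the sed pipeline in the log.
// Illustrative only; not minikube's actual implementation.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward .") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}
```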
	I0920 17:59:05.872829  256536 main.go:141] libmachine: Making call to close driver server
	I0920 17:59:05.872867  256536 main.go:141] libmachine: (ha-347193) Calling .Close
	I0920 17:59:05.872944  256536 main.go:141] libmachine: Making call to close driver server
	I0920 17:59:05.872967  256536 main.go:141] libmachine: (ha-347193) Calling .Close
	I0920 17:59:05.873195  256536 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:59:05.873214  256536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:59:05.873224  256536 main.go:141] libmachine: Making call to close driver server
	I0920 17:59:05.873232  256536 main.go:141] libmachine: (ha-347193) Calling .Close
	I0920 17:59:05.873274  256536 main.go:141] libmachine: (ha-347193) DBG | Closing plugin on server side
	I0920 17:59:05.873310  256536 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:59:05.873317  256536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:59:05.873325  256536 main.go:141] libmachine: Making call to close driver server
	I0920 17:59:05.873332  256536 main.go:141] libmachine: (ha-347193) Calling .Close
	I0920 17:59:05.873517  256536 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:59:05.873541  256536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:59:05.873602  256536 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 17:59:05.873621  256536 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 17:59:05.873624  256536 main.go:141] libmachine: (ha-347193) DBG | Closing plugin on server side
	I0920 17:59:05.873718  256536 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:59:05.873742  256536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:59:05.873751  256536 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0920 17:59:05.873766  256536 round_trippers.go:469] Request Headers:
	I0920 17:59:05.873776  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:59:05.873785  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:59:05.888629  256536 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0920 17:59:05.889182  256536 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0920 17:59:05.889201  256536 round_trippers.go:469] Request Headers:
	I0920 17:59:05.889211  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:59:05.889215  256536 round_trippers.go:473]     Content-Type: application/json
	I0920 17:59:05.889223  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:59:05.892179  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:59:05.892357  256536 main.go:141] libmachine: Making call to close driver server
	I0920 17:59:05.892373  256536 main.go:141] libmachine: (ha-347193) Calling .Close
	I0920 17:59:05.892691  256536 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:59:05.892709  256536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:59:05.894279  256536 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0920 17:59:05.895496  256536 addons.go:510] duration metric: took 793.020671ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0920 17:59:05.895531  256536 start.go:246] waiting for cluster config update ...
	I0920 17:59:05.895542  256536 start.go:255] writing updated cluster config ...
	I0920 17:59:05.897257  256536 out.go:201] 
	I0920 17:59:05.898660  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:59:05.898730  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 17:59:05.900283  256536 out.go:177] * Starting "ha-347193-m02" control-plane node in "ha-347193" cluster
	I0920 17:59:05.901396  256536 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:59:05.901420  256536 cache.go:56] Caching tarball of preloaded images
	I0920 17:59:05.901510  256536 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:59:05.901521  256536 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:59:05.901597  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 17:59:05.901759  256536 start.go:360] acquireMachinesLock for ha-347193-m02: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:59:05.901802  256536 start.go:364] duration metric: took 24.671µs to acquireMachinesLock for "ha-347193-m02"
	I0920 17:59:05.901820  256536 start.go:93] Provisioning new machine with config: &{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:59:05.901885  256536 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0920 17:59:05.903637  256536 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 17:59:05.903736  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:05.903765  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:05.919718  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37733
	I0920 17:59:05.920256  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:05.920760  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:05.920783  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:05.921213  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:05.921446  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetMachineName
	I0920 17:59:05.921623  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:05.921862  256536 start.go:159] libmachine.API.Create for "ha-347193" (driver="kvm2")
	I0920 17:59:05.921894  256536 client.go:168] LocalClient.Create starting
	I0920 17:59:05.921946  256536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem
	I0920 17:59:05.921992  256536 main.go:141] libmachine: Decoding PEM data...
	I0920 17:59:05.922017  256536 main.go:141] libmachine: Parsing certificate...
	I0920 17:59:05.922095  256536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem
	I0920 17:59:05.922126  256536 main.go:141] libmachine: Decoding PEM data...
	I0920 17:59:05.922142  256536 main.go:141] libmachine: Parsing certificate...
	I0920 17:59:05.922169  256536 main.go:141] libmachine: Running pre-create checks...
	I0920 17:59:05.922181  256536 main.go:141] libmachine: (ha-347193-m02) Calling .PreCreateCheck
	I0920 17:59:05.922398  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetConfigRaw
	I0920 17:59:05.922898  256536 main.go:141] libmachine: Creating machine...
	I0920 17:59:05.922915  256536 main.go:141] libmachine: (ha-347193-m02) Calling .Create
	I0920 17:59:05.923043  256536 main.go:141] libmachine: (ha-347193-m02) Creating KVM machine...
	I0920 17:59:05.924563  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found existing default KVM network
	I0920 17:59:05.924648  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found existing private KVM network mk-ha-347193
	I0920 17:59:05.924819  256536 main.go:141] libmachine: (ha-347193-m02) Setting up store path in /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02 ...
	I0920 17:59:05.924844  256536 main.go:141] libmachine: (ha-347193-m02) Building disk image from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 17:59:05.924904  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:05.924790  256915 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:59:05.925011  256536 main.go:141] libmachine: (ha-347193-m02) Downloading /home/jenkins/minikube-integration/19679-237658/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 17:59:06.216167  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:06.216027  256915 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa...
	I0920 17:59:06.325597  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:06.325412  256915 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/ha-347193-m02.rawdisk...
	I0920 17:59:06.325640  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Writing magic tar header
	I0920 17:59:06.325658  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Writing SSH key tar header
	I0920 17:59:06.325672  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:06.325581  256915 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02 ...
	I0920 17:59:06.325689  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02
	I0920 17:59:06.325740  256536 main.go:141] libmachine: (ha-347193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02 (perms=drwx------)
	I0920 17:59:06.325762  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines
	I0920 17:59:06.325774  256536 main.go:141] libmachine: (ha-347193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines (perms=drwxr-xr-x)
	I0920 17:59:06.325786  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:59:06.325801  256536 main.go:141] libmachine: (ha-347193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube (perms=drwxr-xr-x)
	I0920 17:59:06.325822  256536 main.go:141] libmachine: (ha-347193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658 (perms=drwxrwxr-x)
	I0920 17:59:06.325834  256536 main.go:141] libmachine: (ha-347193-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 17:59:06.325857  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658
	I0920 17:59:06.325886  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 17:59:06.325897  256536 main.go:141] libmachine: (ha-347193-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 17:59:06.325927  256536 main.go:141] libmachine: (ha-347193-m02) Creating domain...
	I0920 17:59:06.325957  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Checking permissions on dir: /home/jenkins
	I0920 17:59:06.325971  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Checking permissions on dir: /home
	I0920 17:59:06.325982  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Skipping /home - not owner
	I0920 17:59:06.327271  256536 main.go:141] libmachine: (ha-347193-m02) define libvirt domain using xml: 
	I0920 17:59:06.327300  256536 main.go:141] libmachine: (ha-347193-m02) <domain type='kvm'>
	I0920 17:59:06.327310  256536 main.go:141] libmachine: (ha-347193-m02)   <name>ha-347193-m02</name>
	I0920 17:59:06.327317  256536 main.go:141] libmachine: (ha-347193-m02)   <memory unit='MiB'>2200</memory>
	I0920 17:59:06.327324  256536 main.go:141] libmachine: (ha-347193-m02)   <vcpu>2</vcpu>
	I0920 17:59:06.327330  256536 main.go:141] libmachine: (ha-347193-m02)   <features>
	I0920 17:59:06.327339  256536 main.go:141] libmachine: (ha-347193-m02)     <acpi/>
	I0920 17:59:06.327347  256536 main.go:141] libmachine: (ha-347193-m02)     <apic/>
	I0920 17:59:06.327356  256536 main.go:141] libmachine: (ha-347193-m02)     <pae/>
	I0920 17:59:06.327366  256536 main.go:141] libmachine: (ha-347193-m02)     
	I0920 17:59:06.327375  256536 main.go:141] libmachine: (ha-347193-m02)   </features>
	I0920 17:59:06.327386  256536 main.go:141] libmachine: (ha-347193-m02)   <cpu mode='host-passthrough'>
	I0920 17:59:06.327396  256536 main.go:141] libmachine: (ha-347193-m02)   
	I0920 17:59:06.327411  256536 main.go:141] libmachine: (ha-347193-m02)   </cpu>
	I0920 17:59:06.327426  256536 main.go:141] libmachine: (ha-347193-m02)   <os>
	I0920 17:59:06.327438  256536 main.go:141] libmachine: (ha-347193-m02)     <type>hvm</type>
	I0920 17:59:06.327452  256536 main.go:141] libmachine: (ha-347193-m02)     <boot dev='cdrom'/>
	I0920 17:59:06.327463  256536 main.go:141] libmachine: (ha-347193-m02)     <boot dev='hd'/>
	I0920 17:59:06.327471  256536 main.go:141] libmachine: (ha-347193-m02)     <bootmenu enable='no'/>
	I0920 17:59:06.327482  256536 main.go:141] libmachine: (ha-347193-m02)   </os>
	I0920 17:59:06.327490  256536 main.go:141] libmachine: (ha-347193-m02)   <devices>
	I0920 17:59:06.327501  256536 main.go:141] libmachine: (ha-347193-m02)     <disk type='file' device='cdrom'>
	I0920 17:59:06.327515  256536 main.go:141] libmachine: (ha-347193-m02)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/boot2docker.iso'/>
	I0920 17:59:06.327544  256536 main.go:141] libmachine: (ha-347193-m02)       <target dev='hdc' bus='scsi'/>
	I0920 17:59:06.327569  256536 main.go:141] libmachine: (ha-347193-m02)       <readonly/>
	I0920 17:59:06.327578  256536 main.go:141] libmachine: (ha-347193-m02)     </disk>
	I0920 17:59:06.327587  256536 main.go:141] libmachine: (ha-347193-m02)     <disk type='file' device='disk'>
	I0920 17:59:06.327597  256536 main.go:141] libmachine: (ha-347193-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 17:59:06.327607  256536 main.go:141] libmachine: (ha-347193-m02)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/ha-347193-m02.rawdisk'/>
	I0920 17:59:06.327619  256536 main.go:141] libmachine: (ha-347193-m02)       <target dev='hda' bus='virtio'/>
	I0920 17:59:06.327627  256536 main.go:141] libmachine: (ha-347193-m02)     </disk>
	I0920 17:59:06.327635  256536 main.go:141] libmachine: (ha-347193-m02)     <interface type='network'>
	I0920 17:59:06.327649  256536 main.go:141] libmachine: (ha-347193-m02)       <source network='mk-ha-347193'/>
	I0920 17:59:06.327659  256536 main.go:141] libmachine: (ha-347193-m02)       <model type='virtio'/>
	I0920 17:59:06.327669  256536 main.go:141] libmachine: (ha-347193-m02)     </interface>
	I0920 17:59:06.327680  256536 main.go:141] libmachine: (ha-347193-m02)     <interface type='network'>
	I0920 17:59:06.327690  256536 main.go:141] libmachine: (ha-347193-m02)       <source network='default'/>
	I0920 17:59:06.327701  256536 main.go:141] libmachine: (ha-347193-m02)       <model type='virtio'/>
	I0920 17:59:06.327711  256536 main.go:141] libmachine: (ha-347193-m02)     </interface>
	I0920 17:59:06.327722  256536 main.go:141] libmachine: (ha-347193-m02)     <serial type='pty'>
	I0920 17:59:06.327737  256536 main.go:141] libmachine: (ha-347193-m02)       <target port='0'/>
	I0920 17:59:06.327748  256536 main.go:141] libmachine: (ha-347193-m02)     </serial>
	I0920 17:59:06.327761  256536 main.go:141] libmachine: (ha-347193-m02)     <console type='pty'>
	I0920 17:59:06.327773  256536 main.go:141] libmachine: (ha-347193-m02)       <target type='serial' port='0'/>
	I0920 17:59:06.327786  256536 main.go:141] libmachine: (ha-347193-m02)     </console>
	I0920 17:59:06.327797  256536 main.go:141] libmachine: (ha-347193-m02)     <rng model='virtio'>
	I0920 17:59:06.327808  256536 main.go:141] libmachine: (ha-347193-m02)       <backend model='random'>/dev/random</backend>
	I0920 17:59:06.327819  256536 main.go:141] libmachine: (ha-347193-m02)     </rng>
	I0920 17:59:06.327825  256536 main.go:141] libmachine: (ha-347193-m02)     
	I0920 17:59:06.327833  256536 main.go:141] libmachine: (ha-347193-m02)     
	I0920 17:59:06.327840  256536 main.go:141] libmachine: (ha-347193-m02)   </devices>
	I0920 17:59:06.327847  256536 main.go:141] libmachine: (ha-347193-m02) </domain>
	I0920 17:59:06.327853  256536 main.go:141] libmachine: (ha-347193-m02) 
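The XML dumped line by line above is the complete libvirt domain definition for the new node. A rough sketch of registering and booting such a definition, assuming the libvirt.org/go/libvirt bindings and a much shortened XML document (not the driver's actual code path):

```go
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Connect to the same system URI the log shows (qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// A heavily shortened stand-in for the <domain type='kvm'> document
	// printed in the log; illustrative only.
	domainXML := `<domain type='kvm'>
  <name>example-m02</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type></os>
  <devices></devices>
</domain>`

	// Define the persistent domain from XML, then boot it.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
	log.Println("domain defined and started")
}
```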
	I0920 17:59:06.335776  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:99:8b:51 in network default
	I0920 17:59:06.336465  256536 main.go:141] libmachine: (ha-347193-m02) Ensuring networks are active...
	I0920 17:59:06.336495  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:06.337274  256536 main.go:141] libmachine: (ha-347193-m02) Ensuring network default is active
	I0920 17:59:06.337717  256536 main.go:141] libmachine: (ha-347193-m02) Ensuring network mk-ha-347193 is active
	I0920 17:59:06.338271  256536 main.go:141] libmachine: (ha-347193-m02) Getting domain xml...
	I0920 17:59:06.339065  256536 main.go:141] libmachine: (ha-347193-m02) Creating domain...
	I0920 17:59:07.590103  256536 main.go:141] libmachine: (ha-347193-m02) Waiting to get IP...
	I0920 17:59:07.591029  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:07.591430  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:07.591465  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:07.591414  256915 retry.go:31] will retry after 226.007564ms: waiting for machine to come up
	I0920 17:59:07.819128  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:07.819593  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:07.819618  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:07.819539  256915 retry.go:31] will retry after 341.961936ms: waiting for machine to come up
	I0920 17:59:08.163271  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:08.163762  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:08.163842  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:08.163725  256915 retry.go:31] will retry after 303.677068ms: waiting for machine to come up
	I0920 17:59:08.469231  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:08.469723  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:08.469751  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:08.469670  256915 retry.go:31] will retry after 590.358913ms: waiting for machine to come up
	I0920 17:59:09.061444  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:09.061930  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:09.061952  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:09.061882  256915 retry.go:31] will retry after 511.282935ms: waiting for machine to come up
	I0920 17:59:09.574742  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:09.575187  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:09.575214  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:09.575124  256915 retry.go:31] will retry after 856.972258ms: waiting for machine to come up
	I0920 17:59:10.434260  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:10.434831  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:10.434853  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:10.434774  256915 retry.go:31] will retry after 836.344709ms: waiting for machine to come up
	I0920 17:59:11.273284  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:11.274041  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:11.274078  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:11.273981  256915 retry.go:31] will retry after 1.355754749s: waiting for machine to come up
	I0920 17:59:12.631596  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:12.631994  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:12.632021  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:12.631955  256915 retry.go:31] will retry after 1.6398171s: waiting for machine to come up
	I0920 17:59:14.273660  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:14.274139  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:14.274166  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:14.274082  256915 retry.go:31] will retry after 2.299234308s: waiting for machine to come up
	I0920 17:59:16.575079  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:16.575516  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:16.575545  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:16.575474  256915 retry.go:31] will retry after 2.142102972s: waiting for machine to come up
	I0920 17:59:18.720889  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:18.721374  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:18.721401  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:18.721344  256915 retry.go:31] will retry after 2.537816732s: waiting for machine to come up
	I0920 17:59:21.261045  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:21.261472  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:21.261500  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:21.261409  256915 retry.go:31] will retry after 3.610609319s: waiting for machine to come up
	I0920 17:59:24.876357  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:24.876860  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:24.876882  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:24.876825  256915 retry.go:31] will retry after 4.700561987s: waiting for machine to come up
	I0920 17:59:29.581568  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.582102  256536 main.go:141] libmachine: (ha-347193-m02) Found IP for machine: 192.168.39.241
	I0920 17:59:29.582125  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has current primary IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.582131  256536 main.go:141] libmachine: (ha-347193-m02) Reserving static IP address...
	I0920 17:59:29.582608  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find host DHCP lease matching {name: "ha-347193-m02", mac: "52:54:00:2a:a9:ec", ip: "192.168.39.241"} in network mk-ha-347193
	I0920 17:59:29.662003  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Getting to WaitForSSH function...
	I0920 17:59:29.662037  256536 main.go:141] libmachine: (ha-347193-m02) Reserved static IP address: 192.168.39.241
	I0920 17:59:29.662058  256536 main.go:141] libmachine: (ha-347193-m02) Waiting for SSH to be available...
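The repeated "will retry after …" lines above are a polling loop with growing, jittered delays: the driver keeps querying libvirt for a DHCP lease until the freshly created domain reports an address. A simplified sketch of that pattern, with a stand-in `lookupIP` callback instead of the real lease query:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP until it returns an address or the deadline passes,
// sleeping a growing, jittered interval between attempts, similar to the
// "will retry after ..." lines in the log.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		// Add jitter and grow the delay, capping it so we keep polling.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.241", nil
	}, time.Minute)
	fmt.Println(ip, err)
}
```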
	I0920 17:59:29.666033  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.666545  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:29.666582  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.666603  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Using SSH client type: external
	I0920 17:59:29.666618  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa (-rw-------)
	I0920 17:59:29.666652  256536 main.go:141] libmachine: (ha-347193-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.241 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 17:59:29.666668  256536 main.go:141] libmachine: (ha-347193-m02) DBG | About to run SSH command:
	I0920 17:59:29.666675  256536 main.go:141] libmachine: (ha-347193-m02) DBG | exit 0
	I0920 17:59:29.794185  256536 main.go:141] libmachine: (ha-347193-m02) DBG | SSH cmd err, output: <nil>: 
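The "Using SSH client type: external" lines show that the readiness probe is simply the system ssh binary invoked with a fixed option set and the command `exit 0`; a zero exit status means sshd inside the guest is accepting connections. A rough equivalent with os/exec, with the user, host and key path copied from the log purely for illustration:

```go
package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs `ssh ... exit 0` against the new machine; a nil error
// (exit status 0) means the SSH server is up.
func sshReady(user, host, keyPath string) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		"exit", "0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	err := sshReady("docker", "192.168.39.241", "/path/to/id_rsa")
	fmt.Println("ssh reachable:", err == nil)
}
```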
	I0920 17:59:29.794474  256536 main.go:141] libmachine: (ha-347193-m02) KVM machine creation complete!
	I0920 17:59:29.794737  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetConfigRaw
	I0920 17:59:29.795327  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:29.795609  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:29.795784  256536 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 17:59:29.795797  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetState
	I0920 17:59:29.797225  256536 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 17:59:29.797243  256536 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 17:59:29.797249  256536 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 17:59:29.797255  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:29.799913  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.800263  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:29.800285  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.800414  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:29.800599  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:29.800763  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:29.800897  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:29.801057  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:59:29.801269  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0920 17:59:29.801282  256536 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 17:59:29.909222  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:59:29.909246  256536 main.go:141] libmachine: Detecting the provisioner...
	I0920 17:59:29.909255  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:29.912190  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.912743  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:29.912765  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.913023  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:29.913242  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:29.913432  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:29.913591  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:29.913750  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:59:29.913984  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0920 17:59:29.913999  256536 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 17:59:30.022466  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 17:59:30.022546  256536 main.go:141] libmachine: found compatible host: buildroot
	I0920 17:59:30.022558  256536 main.go:141] libmachine: Provisioning with buildroot...
	I0920 17:59:30.022572  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetMachineName
	I0920 17:59:30.022864  256536 buildroot.go:166] provisioning hostname "ha-347193-m02"
	I0920 17:59:30.022888  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetMachineName
	I0920 17:59:30.023065  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:30.025530  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.025878  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.025926  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.026023  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:30.026228  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.026416  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.026576  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:30.026730  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:59:30.026894  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0920 17:59:30.026904  256536 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-347193-m02 && echo "ha-347193-m02" | sudo tee /etc/hostname
	I0920 17:59:30.147982  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-347193-m02
	
	I0920 17:59:30.148028  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:30.151033  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.151386  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.151409  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.151586  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:30.151765  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.151945  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.152170  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:30.152401  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:59:30.152590  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0920 17:59:30.152607  256536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-347193-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-347193-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-347193-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:59:30.271015  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
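Each "About to run SSH command" / "SSH cmd err, output" pair corresponds to one shell snippet executed over an SSH session, like the hostname update just above. A minimal sketch of such a runner using golang.org/x/crypto/ssh, assuming a hypothetical `runRemote` helper rather than minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote opens an SSH session with key auth and runs one shell command,
// returning its combined output.
func runRemote(user, addr, keyPath, cmd string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs only, matching the log's ssh options
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("docker", "192.168.39.241:22", "/path/to/id_rsa",
		`sudo hostname ha-347193-m02 && echo "ha-347193-m02" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}
```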
	I0920 17:59:30.271057  256536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 17:59:30.271078  256536 buildroot.go:174] setting up certificates
	I0920 17:59:30.271087  256536 provision.go:84] configureAuth start
	I0920 17:59:30.271097  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetMachineName
	I0920 17:59:30.271410  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetIP
	I0920 17:59:30.273849  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.274342  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.274365  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.274563  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:30.277006  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.277328  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.277355  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.277454  256536 provision.go:143] copyHostCerts
	I0920 17:59:30.277493  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 17:59:30.277528  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 17:59:30.277538  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 17:59:30.277621  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 17:59:30.277724  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 17:59:30.277753  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 17:59:30.277763  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 17:59:30.277802  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 17:59:30.277864  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 17:59:30.277886  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 17:59:30.277894  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 17:59:30.277955  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 17:59:30.278028  256536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.ha-347193-m02 san=[127.0.0.1 192.168.39.241 ha-347193-m02 localhost minikube]
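provision.go issues a per-machine server certificate whose SANs are the loopback address, the node IP and the hostnames listed above, signed by the minikube CA. A condensed sketch with the standard crypto/x509 package; it self-signs to stay short, whereas the real flow signs with the ca.pem/ca-key.pem pair named in the log:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-347193-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as listed in the log: loopback, node IP, hostnames.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.241")},
		DNSNames:    []string{"ha-347193-m02", "localhost", "minikube"},
	}

	// Self-signed for brevity; minikube signs with its CA cert/key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```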
	I0920 17:59:30.390911  256536 provision.go:177] copyRemoteCerts
	I0920 17:59:30.390984  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:59:30.391016  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:30.394282  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.394669  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.394705  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.394848  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:30.395053  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.395190  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:30.395311  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa Username:docker}
	I0920 17:59:30.480101  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 17:59:30.480183  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 17:59:30.504430  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 17:59:30.504533  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 17:59:30.532508  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 17:59:30.532609  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 17:59:30.555072  256536 provision.go:87] duration metric: took 283.968068ms to configureAuth
	I0920 17:59:30.555106  256536 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:59:30.555298  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:59:30.555382  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:30.558201  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.558658  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.558688  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.558891  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:30.559083  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.559260  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.559393  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:30.559554  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:59:30.559783  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0920 17:59:30.559809  256536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:59:30.779495  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:59:30.779542  256536 main.go:141] libmachine: Checking connection to Docker...
	I0920 17:59:30.779553  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetURL
	I0920 17:59:30.780879  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Using libvirt version 6000000
	I0920 17:59:30.782959  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.783290  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.783321  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.783453  256536 main.go:141] libmachine: Docker is up and running!
	I0920 17:59:30.783468  256536 main.go:141] libmachine: Reticulating splines...
	I0920 17:59:30.783477  256536 client.go:171] duration metric: took 24.8615738s to LocalClient.Create
	I0920 17:59:30.783506  256536 start.go:167] duration metric: took 24.861646798s to libmachine.API.Create "ha-347193"
	I0920 17:59:30.783518  256536 start.go:293] postStartSetup for "ha-347193-m02" (driver="kvm2")
	I0920 17:59:30.783531  256536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:59:30.783550  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:30.783789  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:59:30.783813  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:30.786027  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.786349  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.786370  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.786628  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:30.786815  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.786993  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:30.787118  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa Username:docker}
	I0920 17:59:30.872345  256536 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:59:30.876519  256536 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:59:30.876550  256536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 17:59:30.876627  256536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 17:59:30.876702  256536 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 17:59:30.876712  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /etc/ssl/certs/2448492.pem
	I0920 17:59:30.876794  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:59:30.886441  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 17:59:30.909455  256536 start.go:296] duration metric: took 125.914203ms for postStartSetup
	I0920 17:59:30.909530  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetConfigRaw
	I0920 17:59:30.910141  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetIP
	I0920 17:59:30.912668  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.912976  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.913008  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.913233  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 17:59:30.913434  256536 start.go:128] duration metric: took 25.011535523s to createHost
	I0920 17:59:30.913460  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:30.915700  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.915987  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.916010  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.916226  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:30.916424  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.916603  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.916761  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:30.916950  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:59:30.917155  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0920 17:59:30.917166  256536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:59:31.026461  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855170.997739673
	
	I0920 17:59:31.026489  256536 fix.go:216] guest clock: 1726855170.997739673
	I0920 17:59:31.026496  256536 fix.go:229] Guest: 2024-09-20 17:59:30.997739673 +0000 UTC Remote: 2024-09-20 17:59:30.913448056 +0000 UTC m=+71.229940404 (delta=84.291617ms)
	I0920 17:59:31.026512  256536 fix.go:200] guest clock delta is within tolerance: 84.291617ms
	I0920 17:59:31.026517  256536 start.go:83] releasing machines lock for "ha-347193-m02", held for 25.124707242s
	I0920 17:59:31.026538  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:31.026839  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetIP
	I0920 17:59:31.029757  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:31.030179  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:31.030206  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:31.032445  256536 out.go:177] * Found network options:
	I0920 17:59:31.034196  256536 out.go:177]   - NO_PROXY=192.168.39.246
	W0920 17:59:31.035224  256536 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 17:59:31.035267  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:31.035792  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:31.035991  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:31.036100  256536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:59:31.036143  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	W0920 17:59:31.036175  256536 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 17:59:31.036267  256536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:59:31.036294  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:31.039153  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:31.039466  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:31.039563  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:31.039596  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:31.039727  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:31.039878  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:31.039897  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:31.039909  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:31.040048  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:31.040104  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:31.040219  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa Username:docker}
	I0920 17:59:31.040318  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:31.040480  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:31.040634  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa Username:docker}
	I0920 17:59:31.274255  256536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:59:31.280374  256536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:59:31.280441  256536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:59:31.296955  256536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 17:59:31.296987  256536 start.go:495] detecting cgroup driver to use...
	I0920 17:59:31.297127  256536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:59:31.313543  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:59:31.328017  256536 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:59:31.328096  256536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:59:31.341962  256536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:59:31.355931  256536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:59:31.467597  256536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:59:31.622972  256536 docker.go:233] disabling docker service ...
	I0920 17:59:31.623069  256536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:59:31.637011  256536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:59:31.649605  256536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:59:31.771555  256536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:59:31.885423  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:59:31.898889  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:59:31.916477  256536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:59:31.916540  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:59:31.926444  256536 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:59:31.926525  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:59:31.937116  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:59:31.947355  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:59:31.957415  256536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:59:31.968385  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:59:31.979172  256536 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:59:31.996319  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:59:32.006541  256536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:59:32.015815  256536 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 17:59:32.015883  256536 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 17:59:32.028240  256536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:59:32.037972  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:59:32.152278  256536 ssh_runner.go:195] Run: sudo systemctl restart crio
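For anyone reproducing this step by hand, the CRI-O adjustments logged above reduce to a few edits against the same drop-in file; a minimal shell sketch, assuming the guest's /etc/crio/crio.conf.d/02-crio.conf as in the log (hypothetical manual re-run, not part of the test output):

	# point crictl at the CRI-O socket
	printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	# pin the pause image and switch the cgroup driver to cgroupfs
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# reload units and restart the runtime, mirroring the daemon-reload/restart above
	sudo systemctl daemon-reload && sudo systemctl restart crio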
	I0920 17:59:32.246123  256536 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:59:32.246218  256536 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:59:32.251023  256536 start.go:563] Will wait 60s for crictl version
	I0920 17:59:32.251119  256536 ssh_runner.go:195] Run: which crictl
	I0920 17:59:32.254625  256536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:59:32.289498  256536 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:59:32.289579  256536 ssh_runner.go:195] Run: crio --version
	I0920 17:59:32.316659  256536 ssh_runner.go:195] Run: crio --version
	I0920 17:59:32.344869  256536 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:59:32.346085  256536 out.go:177]   - env NO_PROXY=192.168.39.246
	I0920 17:59:32.347420  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetIP
	I0920 17:59:32.350776  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:32.351141  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:32.351172  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:32.351449  256536 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:59:32.355587  256536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:59:32.367465  256536 mustload.go:65] Loading cluster: ha-347193
	I0920 17:59:32.367713  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:59:32.368030  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:32.368075  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:32.383118  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I0920 17:59:32.383676  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:32.384195  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:32.384214  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:32.384600  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:32.384841  256536 main.go:141] libmachine: (ha-347193) Calling .GetState
	I0920 17:59:32.386464  256536 host.go:66] Checking if "ha-347193" exists ...
	I0920 17:59:32.386753  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:32.386789  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:32.402199  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35537
	I0920 17:59:32.402698  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:32.403237  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:32.403260  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:32.403569  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:32.403791  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:59:32.403932  256536 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193 for IP: 192.168.39.241
	I0920 17:59:32.403945  256536 certs.go:194] generating shared ca certs ...
	I0920 17:59:32.403966  256536 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:59:32.404125  256536 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 17:59:32.404172  256536 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 17:59:32.404185  256536 certs.go:256] generating profile certs ...
	I0920 17:59:32.404277  256536 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key
	I0920 17:59:32.404313  256536 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.32ffe274
	I0920 17:59:32.404333  256536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.32ffe274 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.241 192.168.39.254]
	I0920 17:59:32.510440  256536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.32ffe274 ...
	I0920 17:59:32.510475  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.32ffe274: {Name:mkc30548db6e83d8832ed460ef3ecdc3101e5f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:59:32.510691  256536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.32ffe274 ...
	I0920 17:59:32.510711  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.32ffe274: {Name:mk355121b8c4a956d860782a1b0c1370e7e6b83b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:59:32.510815  256536 certs.go:381] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.32ffe274 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt
	I0920 17:59:32.510982  256536 certs.go:385] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.32ffe274 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key
	I0920 17:59:32.511155  256536 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key
	I0920 17:59:32.511179  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 17:59:32.511194  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 17:59:32.511205  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 17:59:32.511220  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 17:59:32.511234  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 17:59:32.511253  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 17:59:32.511269  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 17:59:32.511287  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 17:59:32.511357  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 17:59:32.511396  256536 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 17:59:32.511405  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:59:32.511438  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 17:59:32.511471  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:59:32.511501  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 17:59:32.511554  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 17:59:32.511594  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /usr/share/ca-certificates/2448492.pem
	I0920 17:59:32.511618  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:59:32.511638  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem -> /usr/share/ca-certificates/244849.pem
	I0920 17:59:32.511683  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:59:32.515008  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:32.515405  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:59:32.515433  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:32.515642  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:59:32.515847  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:59:32.515999  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:59:32.516117  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:59:32.590305  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 17:59:32.595442  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 17:59:32.607284  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 17:59:32.611399  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0920 17:59:32.622339  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 17:59:32.626371  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 17:59:32.636850  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 17:59:32.640553  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0920 17:59:32.651329  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 17:59:32.655163  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 17:59:32.666449  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 17:59:32.670985  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0920 17:59:32.681916  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:59:32.706099  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:59:32.733293  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:59:32.756993  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 17:59:32.781045  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0920 17:59:32.804602  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 17:59:32.829390  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:59:32.854727  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:59:32.878575  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 17:59:32.902198  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:59:32.926004  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 17:59:32.950687  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 17:59:32.966783  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0920 17:59:32.982858  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 17:59:32.998897  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0920 17:59:33.015096  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 17:59:33.030999  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0920 17:59:33.046670  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 17:59:33.063118  256536 ssh_runner.go:195] Run: openssl version
	I0920 17:59:33.068899  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 17:59:33.079939  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 17:59:33.084424  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 17:59:33.084485  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 17:59:33.090249  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 17:59:33.100697  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:59:33.111242  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:59:33.115679  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:59:33.115779  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:59:33.121728  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:59:33.132827  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 17:59:33.144204  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 17:59:33.148909  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 17:59:33.149013  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 17:59:33.155176  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
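The hash-named symlinks created above follow OpenSSL's CA directory convention: each certificate in /etc/ssl/certs is linked under its subject hash so TLS clients can locate it. A minimal sketch of the same step for one certificate, reusing the paths from the log (hypothetical manual re-run):

	# compute the subject hash and link the cert under <hash>.0
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"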
	I0920 17:59:33.167680  256536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:59:33.171844  256536 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:59:33.171909  256536 kubeadm.go:934] updating node {m02 192.168.39.241 8443 v1.31.1 crio true true} ...
	I0920 17:59:33.172010  256536 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-347193-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 17:59:33.172048  256536 kube-vip.go:115] generating kube-vip config ...
	I0920 17:59:33.172096  256536 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 17:59:33.188452  256536 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 17:59:33.188534  256536 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
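The kube-vip config rendered above is a static-pod manifest; later in this run it is copied to /etc/kubernetes/manifests/kube-vip.yaml (the 1441-byte scp further down), so the kubelet on ha-347193-m02 runs it directly. A minimal way to confirm the pod from the guest, assuming crictl is available as shown earlier (hypothetical check, not part of the test output):

	# list the kube-vip sandbox and container via the CRI socket
	sudo /usr/bin/crictl pods --name kube-vip
	sudo /usr/bin/crictl ps --name kube-vip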
	I0920 17:59:33.188596  256536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:59:33.200215  256536 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 17:59:33.200283  256536 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 17:59:33.211876  256536 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 17:59:33.211910  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 17:59:33.211977  256536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 17:59:33.211977  256536 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0920 17:59:33.211976  256536 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0920 17:59:33.216444  256536 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 17:59:33.216484  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 17:59:34.138597  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 17:59:34.138688  256536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 17:59:34.143879  256536 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 17:59:34.143926  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 17:59:34.359690  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:59:34.385444  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 17:59:34.385565  256536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 17:59:34.390030  256536 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 17:59:34.390071  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0920 17:59:34.700597  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 17:59:34.710043  256536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 17:59:34.726628  256536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:59:34.743032  256536 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 17:59:34.758894  256536 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 17:59:34.762912  256536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:59:34.775241  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:59:34.903828  256536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:59:34.920877  256536 host.go:66] Checking if "ha-347193" exists ...
	I0920 17:59:34.921370  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:34.921427  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:34.936803  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45877
	I0920 17:59:34.937329  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:34.937858  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:34.937878  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:34.938232  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:34.938485  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:59:34.938651  256536 start.go:317] joinCluster: &{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:59:34.938783  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 17:59:34.938806  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:59:34.942213  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:34.942681  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:59:34.942710  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:34.942970  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:59:34.943133  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:59:34.943329  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:59:34.943450  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:59:35.091635  256536 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:59:35.091698  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u7ake0.3opk6636yb6nqfez --discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-347193-m02 --control-plane --apiserver-advertise-address=192.168.39.241 --apiserver-bind-port=8443"
	I0920 17:59:58.407521  256536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u7ake0.3opk6636yb6nqfez --discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-347193-m02 --control-plane --apiserver-advertise-address=192.168.39.241 --apiserver-bind-port=8443": (23.315793188s)
	I0920 17:59:58.407571  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 17:59:58.935865  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-347193-m02 minikube.k8s.io/updated_at=2024_09_20T17_59_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=ha-347193 minikube.k8s.io/primary=false
	I0920 17:59:59.078065  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-347193-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 17:59:59.202785  256536 start.go:319] duration metric: took 24.264127262s to joinCluster
	I0920 17:59:59.202881  256536 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:59:59.203156  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:59:59.204855  256536 out.go:177] * Verifying Kubernetes components...
	I0920 17:59:59.206648  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:59:59.459291  256536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:59:59.534641  256536 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:59:59.534924  256536 kapi.go:59] client config for ha-347193: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.crt", KeyFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key", CAFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 17:59:59.534997  256536 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.246:8443
	I0920 17:59:59.535231  256536 node_ready.go:35] waiting up to 6m0s for node "ha-347193-m02" to be "Ready" ...
	I0920 17:59:59.535334  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 17:59:59.535343  256536 round_trippers.go:469] Request Headers:
	I0920 17:59:59.535354  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:59:59.535362  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:59:59.550229  256536 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0920 18:00:00.035883  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:00.035909  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:00.035928  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:00.035932  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:00.046596  256536 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0920 18:00:00.535658  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:00.535691  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:00.535702  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:00.535709  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:00.541409  256536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:00:01.035971  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:01.036006  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:01.036018  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:01.036024  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:01.040150  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:01.536089  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:01.536113  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:01.536123  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:01.536128  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:01.540239  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:01.540746  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:02.036207  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:02.036234  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:02.036250  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:02.036253  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:02.040514  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:02.535543  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:02.535572  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:02.535585  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:02.535591  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:02.541651  256536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 18:00:03.035563  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:03.035589  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:03.035598  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:03.035606  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:03.039108  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:03.535979  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:03.536001  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:03.536009  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:03.536019  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:03.539926  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:04.035710  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:04.035734  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:04.035743  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:04.035746  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:04.039659  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:04.040156  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:04.535537  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:04.535559  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:04.535572  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:04.535575  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:04.540040  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:05.036185  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:05.036211  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:05.036222  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:05.036229  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:05.040132  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:05.536445  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:05.536515  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:05.536529  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:05.536535  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:05.539954  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:06.036190  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:06.036217  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:06.036228  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:06.036235  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:06.039984  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:06.040529  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:06.535732  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:06.535756  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:06.535765  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:06.535769  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:06.539264  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:07.036241  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:07.036266  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:07.036274  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:07.036278  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:07.040942  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:07.535952  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:07.535977  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:07.535986  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:07.535989  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:07.539355  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:08.036196  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:08.036223  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:08.036231  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:08.036235  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:08.039851  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:08.535561  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:08.535589  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:08.535603  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:08.535609  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:08.540000  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:08.540484  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:09.035653  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:09.035683  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:09.035692  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:09.035695  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:09.039339  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:09.536386  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:09.536410  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:09.536421  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:09.536427  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:09.539675  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:10.036302  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:10.036335  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:10.036347  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:10.036352  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:10.039818  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:10.535749  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:10.535778  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:10.535787  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:10.535792  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:10.539640  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:11.036020  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:11.036050  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:11.036060  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:11.036066  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:11.039525  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:11.040266  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:11.535666  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:11.535691  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:11.535697  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:11.535700  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:11.538988  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:12.036243  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:12.036277  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:12.036285  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:12.036289  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:12.040685  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:12.535894  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:12.535923  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:12.535931  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:12.535936  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:12.539877  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:13.036023  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:13.036052  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:13.036062  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:13.036068  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:13.039752  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:13.040483  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:13.535855  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:13.535883  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:13.535894  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:13.535899  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:13.539399  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:14.036503  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:14.036530  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:14.036539  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:14.036542  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:14.040297  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:14.536446  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:14.536477  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:14.536489  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:14.536496  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:14.539974  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:15.036448  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:15.036478  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:15.036489  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:15.036495  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:15.040620  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:15.041167  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:15.535516  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:15.535545  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:15.535553  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:15.535559  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:15.539083  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:16.036510  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:16.036537  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:16.036546  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:16.036549  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:16.041085  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:16.535826  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:16.535849  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:16.535861  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:16.535865  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:16.539059  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:17.036117  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:17.036144  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:17.036153  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:17.036160  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:17.040478  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:17.535518  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:17.535543  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:17.535552  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:17.535556  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:17.540491  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:17.541065  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:18.035427  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:18.035454  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.035462  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.035467  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.039556  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:18.040741  256536 node_ready.go:49] node "ha-347193-m02" has status "Ready":"True"
	I0920 18:00:18.040773  256536 node_ready.go:38] duration metric: took 18.505523491s for node "ha-347193-m02" to be "Ready" ...
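The repeated GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02 requests above are a readiness poll: the Node object is fetched until its Ready condition flips to True. A minimal sketch of that loop, assuming client-go (this is not minikube's actual node_ready.go helper, and the kubeconfig path is illustrative):

// Minimal sketch (not minikube's code): poll GET /api/v1/nodes/<name> until the
// node reports Ready=True, mirroring the loop logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, interval time.Duration) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node has status "Ready":"True", as logged above
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // timed out waiting for readiness
		case <-time.After(interval):
		}
	}
}

func main() {
	// kubeconfig path is illustrative, not taken from the log
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "ha-347193-m02", 500*time.Millisecond); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}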
	I0920 18:00:18.040784  256536 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:00:18.040932  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:00:18.040941  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.040957  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.040962  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.046873  256536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:00:18.054373  256536 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6llmd" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.054477  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6llmd
	I0920 18:00:18.054485  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.054492  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.054496  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.058597  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:18.060016  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:18.060034  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.060042  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.060047  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.062721  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:00:18.063302  256536 pod_ready.go:93] pod "coredns-7c65d6cfc9-6llmd" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:18.063326  256536 pod_ready.go:82] duration metric: took 8.921017ms for pod "coredns-7c65d6cfc9-6llmd" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.063339  256536 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bkmhn" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.063419  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bkmhn
	I0920 18:00:18.063429  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.063437  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.063442  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.065673  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:00:18.066345  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:18.066361  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.066368  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.066372  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.068535  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:00:18.068957  256536 pod_ready.go:93] pod "coredns-7c65d6cfc9-bkmhn" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:18.068975  256536 pod_ready.go:82] duration metric: took 5.629047ms for pod "coredns-7c65d6cfc9-bkmhn" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.068985  256536 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.069042  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-347193
	I0920 18:00:18.069050  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.069058  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.069064  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.071215  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:00:18.071725  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:18.071741  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.071748  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.071752  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.076248  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:18.076783  256536 pod_ready.go:93] pod "etcd-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:18.076809  256536 pod_ready.go:82] duration metric: took 7.814986ms for pod "etcd-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.076822  256536 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.076903  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-347193-m02
	I0920 18:00:18.076913  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.076933  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.076942  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.079425  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:00:18.080041  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:18.080062  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.080070  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.080073  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.082658  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:00:18.083080  256536 pod_ready.go:93] pod "etcd-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:18.083098  256536 pod_ready.go:82] duration metric: took 6.269137ms for pod "etcd-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.083120  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.235451  256536 request.go:632] Waited for 152.265053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193
	I0920 18:00:18.235515  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193
	I0920 18:00:18.235520  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.235529  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.235538  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.239325  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:18.436436  256536 request.go:632] Waited for 196.38005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:18.436497  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:18.436502  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.436510  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.436513  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.439995  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:18.440920  256536 pod_ready.go:93] pod "kube-apiserver-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:18.440944  256536 pod_ready.go:82] duration metric: took 357.817605ms for pod "kube-apiserver-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.440954  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.636140  256536 request.go:632] Waited for 195.087959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193-m02
	I0920 18:00:18.636243  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193-m02
	I0920 18:00:18.636255  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.636268  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.636280  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.640087  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:18.836246  256536 request.go:632] Waited for 195.361959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:18.836311  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:18.836316  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.836323  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.836328  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.840653  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:18.841777  256536 pod_ready.go:93] pod "kube-apiserver-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:18.841799  256536 pod_ready.go:82] duration metric: took 400.83724ms for pod "kube-apiserver-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.841809  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:19.036009  256536 request.go:632] Waited for 194.129324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193
	I0920 18:00:19.036093  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193
	I0920 18:00:19.036098  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:19.036106  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:19.036111  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:19.039737  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:19.236270  256536 request.go:632] Waited for 195.455754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:19.236346  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:19.236354  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:19.236365  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:19.236373  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:19.241800  256536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:00:19.242348  256536 pod_ready.go:93] pod "kube-controller-manager-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:19.242373  256536 pod_ready.go:82] duration metric: took 400.554651ms for pod "kube-controller-manager-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:19.242385  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:19.436357  256536 request.go:632] Waited for 193.884621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193-m02
	I0920 18:00:19.436449  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193-m02
	I0920 18:00:19.436463  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:19.436474  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:19.436485  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:19.446510  256536 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0920 18:00:19.635563  256536 request.go:632] Waited for 188.301909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:19.635648  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:19.635653  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:19.635661  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:19.635665  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:19.639157  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:19.639875  256536 pod_ready.go:93] pod "kube-controller-manager-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:19.639909  256536 pod_ready.go:82] duration metric: took 397.513343ms for pod "kube-controller-manager-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:19.639925  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ffdvq" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:19.836481  256536 request.go:632] Waited for 196.456867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ffdvq
	I0920 18:00:19.836549  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ffdvq
	I0920 18:00:19.836555  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:19.836563  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:19.836568  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:19.840480  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:20.036151  256536 request.go:632] Waited for 194.863834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:20.036217  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:20.036230  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:20.036238  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:20.036242  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:20.040324  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:20.040897  256536 pod_ready.go:93] pod "kube-proxy-ffdvq" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:20.040926  256536 pod_ready.go:82] duration metric: took 400.990573ms for pod "kube-proxy-ffdvq" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:20.040940  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rdqkg" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:20.235885  256536 request.go:632] Waited for 194.862598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdqkg
	I0920 18:00:20.235966  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdqkg
	I0920 18:00:20.235973  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:20.235983  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:20.235989  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:20.239847  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:20.436319  256536 request.go:632] Waited for 195.461517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:20.436386  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:20.436391  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:20.436399  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:20.436403  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:20.440218  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:20.440901  256536 pod_ready.go:93] pod "kube-proxy-rdqkg" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:20.440935  256536 pod_ready.go:82] duration metric: took 399.983159ms for pod "kube-proxy-rdqkg" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:20.440946  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:20.636078  256536 request.go:632] Waited for 195.028076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193
	I0920 18:00:20.636162  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193
	I0920 18:00:20.636181  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:20.636193  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:20.636206  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:20.639813  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:20.835867  256536 request.go:632] Waited for 195.433474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:20.835962  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:20.835968  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:20.835976  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:20.835982  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:20.839792  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:20.840650  256536 pod_ready.go:93] pod "kube-scheduler-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:20.840681  256536 pod_ready.go:82] duration metric: took 399.725704ms for pod "kube-scheduler-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:20.840695  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:21.036247  256536 request.go:632] Waited for 195.4677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193-m02
	I0920 18:00:21.036330  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193-m02
	I0920 18:00:21.036335  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:21.036344  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:21.036348  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:21.040845  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:21.235815  256536 request.go:632] Waited for 194.360469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:21.235904  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:21.235911  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:21.235921  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:21.235928  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:21.239741  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:21.240157  256536 pod_ready.go:93] pod "kube-scheduler-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:21.240181  256536 pod_ready.go:82] duration metric: took 399.476235ms for pod "kube-scheduler-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:21.240195  256536 pod_ready.go:39] duration metric: took 3.199359276s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
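The "Waited for ... due to client-side throttling, not priority and fairness" lines interleaved above come from client-go's local rate limiter, which delays requests once they exceed the configured QPS/Burst (the defaults are low, roughly QPS=5 and Burst=10). A small sketch of where those knobs live on the rest.Config, reusing the clientcmd/kubernetes imports from the previous sketch; the values are illustrative, not what minikube uses:

// Sketch: relax client-go's client-side rate limiter so fewer requests are
// delayed locally with "client-side throttling" waits.
func newFastClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // sustained requests per second allowed by the local limiter
	cfg.Burst = 100 // burst size before client-side throttling delays requests
	return kubernetes.NewForConfig(cfg)
}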
	I0920 18:00:21.240216  256536 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:00:21.240276  256536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:00:21.258549  256536 api_server.go:72] duration metric: took 22.055620378s to wait for apiserver process to appear ...
	I0920 18:00:21.258580  256536 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:00:21.258610  256536 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I0920 18:00:21.263626  256536 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I0920 18:00:21.263706  256536 round_trippers.go:463] GET https://192.168.39.246:8443/version
	I0920 18:00:21.263711  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:21.263719  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:21.263724  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:21.265005  256536 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0920 18:00:21.265129  256536 api_server.go:141] control plane version: v1.31.1
	I0920 18:00:21.265148  256536 api_server.go:131] duration metric: took 6.561205ms to wait for apiserver health ...
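The healthz and /version probes above can be expressed with client-go's discovery REST client. A hedged sketch (cs is a clientset as in the earlier sketch; apiserverHealthy is a made-up helper name, not a minikube function):

// Sketch: probe /healthz and read the control-plane version, as logged above.
func apiserverHealthy(ctx context.Context, cs kubernetes.Interface) (bool, string) {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil || string(body) != "ok" {
		return false, "" // apiserver not healthy yet; the caller retries until timeout
	}
	info, err := cs.Discovery().ServerVersion()
	if err != nil {
		return true, ""
	}
	return true, info.GitVersion // e.g. "v1.31.1", matching the log
}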
	I0920 18:00:21.265155  256536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:00:21.435532  256536 request.go:632] Waited for 170.291625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:00:21.435621  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:00:21.435628  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:21.435636  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:21.435639  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:21.442020  256536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 18:00:21.446425  256536 system_pods.go:59] 17 kube-system pods found
	I0920 18:00:21.446458  256536 system_pods.go:61] "coredns-7c65d6cfc9-6llmd" [8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92] Running
	I0920 18:00:21.446463  256536 system_pods.go:61] "coredns-7c65d6cfc9-bkmhn" [f7862a6e-54cc-450c-b283-d20fb99f51ce] Running
	I0920 18:00:21.446467  256536 system_pods.go:61] "etcd-ha-347193" [e13fc198-b02b-4f0a-bf76-be0f519d9d57] Running
	I0920 18:00:21.446470  256536 system_pods.go:61] "etcd-ha-347193-m02" [4ea69953-b35a-4ae9-8153-cea3be5e2c1c] Running
	I0920 18:00:21.446473  256536 system_pods.go:61] "kindnet-cqbxl" [3d49a6b1-5be5-4d96-98e3-bd05035a2d1b] Running
	I0920 18:00:21.446478  256536 system_pods.go:61] "kindnet-z24zp" [9271d251-2d95-4b23-85f3-7da6567b2fc3] Running
	I0920 18:00:21.446482  256536 system_pods.go:61] "kube-apiserver-ha-347193" [993ccf05-a39a-42b4-b82d-936531325dc4] Running
	I0920 18:00:21.446485  256536 system_pods.go:61] "kube-apiserver-ha-347193-m02" [43cd77b8-8925-4a04-a8cf-1b9a0cbbc502] Running
	I0920 18:00:21.446489  256536 system_pods.go:61] "kube-controller-manager-ha-347193" [6de3a14b-6587-45d4-aaee-1256b9c327cc] Running
	I0920 18:00:21.446492  256536 system_pods.go:61] "kube-controller-manager-ha-347193-m02" [cdf3f4d7-0675-4c59-8ad5-8901104d71c3] Running
	I0920 18:00:21.446495  256536 system_pods.go:61] "kube-proxy-ffdvq" [97120f62-0af2-405a-b8ff-639c72a39a2d] Running
	I0920 18:00:21.446500  256536 system_pods.go:61] "kube-proxy-rdqkg" [d9ae4e37-b29b-400a-af2d-544da4024069] Running
	I0920 18:00:21.446502  256536 system_pods.go:61] "kube-scheduler-ha-347193" [910baa0e-404e-4ac7-9262-848672eaf9cf] Running
	I0920 18:00:21.446505  256536 system_pods.go:61] "kube-scheduler-ha-347193-m02" [623b9c3b-b998-4516-a53e-17e9d8970594] Running
	I0920 18:00:21.446508  256536 system_pods.go:61] "kube-vip-ha-347193" [20d6faa4-600f-4bd0-8acb-1f95c047da58] Running
	I0920 18:00:21.446511  256536 system_pods.go:61] "kube-vip-ha-347193-m02" [1455826c-7b3d-40f7-bb15-a9861ee95e19] Running
	I0920 18:00:21.446516  256536 system_pods.go:61] "storage-provisioner" [8924f7ce-85a0-4587-9c05-8a74c7113e9e] Running
	I0920 18:00:21.446521  256536 system_pods.go:74] duration metric: took 181.36053ms to wait for pod list to return data ...
	I0920 18:00:21.446528  256536 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:00:21.636065  256536 request.go:632] Waited for 189.405126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:00:21.636135  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:00:21.636141  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:21.636148  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:21.636153  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:21.640839  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:21.641122  256536 default_sa.go:45] found service account: "default"
	I0920 18:00:21.641142  256536 default_sa.go:55] duration metric: took 194.607217ms for default service account to be created ...
	I0920 18:00:21.641151  256536 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:00:21.835580  256536 request.go:632] Waited for 194.337083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:00:21.835675  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:00:21.835682  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:21.835689  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:21.835693  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:21.841225  256536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:00:21.846004  256536 system_pods.go:86] 17 kube-system pods found
	I0920 18:00:21.846039  256536 system_pods.go:89] "coredns-7c65d6cfc9-6llmd" [8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92] Running
	I0920 18:00:21.846046  256536 system_pods.go:89] "coredns-7c65d6cfc9-bkmhn" [f7862a6e-54cc-450c-b283-d20fb99f51ce] Running
	I0920 18:00:21.846051  256536 system_pods.go:89] "etcd-ha-347193" [e13fc198-b02b-4f0a-bf76-be0f519d9d57] Running
	I0920 18:00:21.846055  256536 system_pods.go:89] "etcd-ha-347193-m02" [4ea69953-b35a-4ae9-8153-cea3be5e2c1c] Running
	I0920 18:00:21.846059  256536 system_pods.go:89] "kindnet-cqbxl" [3d49a6b1-5be5-4d96-98e3-bd05035a2d1b] Running
	I0920 18:00:21.846062  256536 system_pods.go:89] "kindnet-z24zp" [9271d251-2d95-4b23-85f3-7da6567b2fc3] Running
	I0920 18:00:21.846066  256536 system_pods.go:89] "kube-apiserver-ha-347193" [993ccf05-a39a-42b4-b82d-936531325dc4] Running
	I0920 18:00:21.846070  256536 system_pods.go:89] "kube-apiserver-ha-347193-m02" [43cd77b8-8925-4a04-a8cf-1b9a0cbbc502] Running
	I0920 18:00:21.846074  256536 system_pods.go:89] "kube-controller-manager-ha-347193" [6de3a14b-6587-45d4-aaee-1256b9c327cc] Running
	I0920 18:00:21.846078  256536 system_pods.go:89] "kube-controller-manager-ha-347193-m02" [cdf3f4d7-0675-4c59-8ad5-8901104d71c3] Running
	I0920 18:00:21.846082  256536 system_pods.go:89] "kube-proxy-ffdvq" [97120f62-0af2-405a-b8ff-639c72a39a2d] Running
	I0920 18:00:21.846085  256536 system_pods.go:89] "kube-proxy-rdqkg" [d9ae4e37-b29b-400a-af2d-544da4024069] Running
	I0920 18:00:21.846089  256536 system_pods.go:89] "kube-scheduler-ha-347193" [910baa0e-404e-4ac7-9262-848672eaf9cf] Running
	I0920 18:00:21.846093  256536 system_pods.go:89] "kube-scheduler-ha-347193-m02" [623b9c3b-b998-4516-a53e-17e9d8970594] Running
	I0920 18:00:21.846097  256536 system_pods.go:89] "kube-vip-ha-347193" [20d6faa4-600f-4bd0-8acb-1f95c047da58] Running
	I0920 18:00:21.846108  256536 system_pods.go:89] "kube-vip-ha-347193-m02" [1455826c-7b3d-40f7-bb15-a9861ee95e19] Running
	I0920 18:00:21.846111  256536 system_pods.go:89] "storage-provisioner" [8924f7ce-85a0-4587-9c05-8a74c7113e9e] Running
	I0920 18:00:21.846118  256536 system_pods.go:126] duration metric: took 204.961033ms to wait for k8s-apps to be running ...
	I0920 18:00:21.846127  256536 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:00:21.846175  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:00:21.862644  256536 system_svc.go:56] duration metric: took 16.499746ms WaitForService to wait for kubelet
	I0920 18:00:21.862683  256536 kubeadm.go:582] duration metric: took 22.659763297s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:00:21.862708  256536 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:00:22.036245  256536 request.go:632] Waited for 173.422886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I0920 18:00:22.036330  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I0920 18:00:22.036338  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:22.036349  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:22.036357  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:22.040138  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:22.040911  256536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:00:22.040940  256536 node_conditions.go:123] node cpu capacity is 2
	I0920 18:00:22.040957  256536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:00:22.040962  256536 node_conditions.go:123] node cpu capacity is 2
	I0920 18:00:22.040967  256536 node_conditions.go:105] duration metric: took 178.253105ms to run NodePressure ...
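The NodePressure verification above lists the nodes, logs their ephemeral-storage and CPU capacity, and confirms no pressure condition is set. A minimal sketch of that check, assuming client-go (verifyNodePressure is an illustrative name, not minikube's):

// Sketch: log node capacities and fail if any pressure condition is True.
func verifyNodePressure(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					return fmt.Errorf("node %s reports %s", n.Name, c.Type)
				}
			}
		}
	}
	return nil
}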
	I0920 18:00:22.040983  256536 start.go:241] waiting for startup goroutines ...
	I0920 18:00:22.041015  256536 start.go:255] writing updated cluster config ...
	I0920 18:00:22.043512  256536 out.go:201] 
	I0920 18:00:22.045235  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:00:22.045367  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 18:00:22.047395  256536 out.go:177] * Starting "ha-347193-m03" control-plane node in "ha-347193" cluster
	I0920 18:00:22.048977  256536 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:00:22.049012  256536 cache.go:56] Caching tarball of preloaded images
	I0920 18:00:22.049136  256536 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:00:22.049148  256536 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:00:22.049248  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 18:00:22.049435  256536 start.go:360] acquireMachinesLock for ha-347193-m03: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:00:22.049481  256536 start.go:364] duration metric: took 26µs to acquireMachinesLock for "ha-347193-m03"
	I0920 18:00:22.049501  256536 start.go:93] Provisioning new machine with config: &{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:00:22.049631  256536 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0920 18:00:22.051727  256536 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 18:00:22.051867  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:00:22.051912  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:00:22.067720  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40165
	I0920 18:00:22.068325  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:00:22.068884  256536 main.go:141] libmachine: Using API Version  1
	I0920 18:00:22.068907  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:00:22.069270  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:00:22.069481  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetMachineName
	I0920 18:00:22.069638  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:22.069845  256536 start.go:159] libmachine.API.Create for "ha-347193" (driver="kvm2")
	I0920 18:00:22.069873  256536 client.go:168] LocalClient.Create starting
	I0920 18:00:22.069933  256536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem
	I0920 18:00:22.069978  256536 main.go:141] libmachine: Decoding PEM data...
	I0920 18:00:22.069993  256536 main.go:141] libmachine: Parsing certificate...
	I0920 18:00:22.070053  256536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem
	I0920 18:00:22.070073  256536 main.go:141] libmachine: Decoding PEM data...
	I0920 18:00:22.070084  256536 main.go:141] libmachine: Parsing certificate...
	I0920 18:00:22.070099  256536 main.go:141] libmachine: Running pre-create checks...
	I0920 18:00:22.070107  256536 main.go:141] libmachine: (ha-347193-m03) Calling .PreCreateCheck
	I0920 18:00:22.070282  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetConfigRaw
	I0920 18:00:22.070730  256536 main.go:141] libmachine: Creating machine...
	I0920 18:00:22.070742  256536 main.go:141] libmachine: (ha-347193-m03) Calling .Create
	I0920 18:00:22.070908  256536 main.go:141] libmachine: (ha-347193-m03) Creating KVM machine...
	I0920 18:00:22.072409  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found existing default KVM network
	I0920 18:00:22.072583  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found existing private KVM network mk-ha-347193
	I0920 18:00:22.072739  256536 main.go:141] libmachine: (ha-347193-m03) Setting up store path in /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03 ...
	I0920 18:00:22.072765  256536 main.go:141] libmachine: (ha-347193-m03) Building disk image from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 18:00:22.072834  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:22.072724  257331 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:00:22.072916  256536 main.go:141] libmachine: (ha-347193-m03) Downloading /home/jenkins/minikube-integration/19679-237658/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 18:00:22.338205  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:22.338046  257331 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa...
	I0920 18:00:22.401743  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:22.401600  257331 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/ha-347193-m03.rawdisk...
	I0920 18:00:22.401769  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Writing magic tar header
	I0920 18:00:22.401826  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Writing SSH key tar header
	I0920 18:00:22.401856  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:22.401719  257331 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03 ...
	I0920 18:00:22.401875  256536 main.go:141] libmachine: (ha-347193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03 (perms=drwx------)
	I0920 18:00:22.401895  256536 main.go:141] libmachine: (ha-347193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:00:22.401963  256536 main.go:141] libmachine: (ha-347193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube (perms=drwxr-xr-x)
	I0920 18:00:22.401981  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03
	I0920 18:00:22.401996  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines
	I0920 18:00:22.402006  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:00:22.402019  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658
	I0920 18:00:22.402031  256536 main.go:141] libmachine: (ha-347193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658 (perms=drwxrwxr-x)
	I0920 18:00:22.402043  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:00:22.402054  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:00:22.402064  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Checking permissions on dir: /home
	I0920 18:00:22.402077  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Skipping /home - not owner
	I0920 18:00:22.402112  256536 main.go:141] libmachine: (ha-347193-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:00:22.402132  256536 main.go:141] libmachine: (ha-347193-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:00:22.402145  256536 main.go:141] libmachine: (ha-347193-m03) Creating domain...
	I0920 18:00:22.403163  256536 main.go:141] libmachine: (ha-347193-m03) define libvirt domain using xml: 
	I0920 18:00:22.403182  256536 main.go:141] libmachine: (ha-347193-m03) <domain type='kvm'>
	I0920 18:00:22.403192  256536 main.go:141] libmachine: (ha-347193-m03)   <name>ha-347193-m03</name>
	I0920 18:00:22.403198  256536 main.go:141] libmachine: (ha-347193-m03)   <memory unit='MiB'>2200</memory>
	I0920 18:00:22.403205  256536 main.go:141] libmachine: (ha-347193-m03)   <vcpu>2</vcpu>
	I0920 18:00:22.403215  256536 main.go:141] libmachine: (ha-347193-m03)   <features>
	I0920 18:00:22.403225  256536 main.go:141] libmachine: (ha-347193-m03)     <acpi/>
	I0920 18:00:22.403233  256536 main.go:141] libmachine: (ha-347193-m03)     <apic/>
	I0920 18:00:22.403245  256536 main.go:141] libmachine: (ha-347193-m03)     <pae/>
	I0920 18:00:22.403253  256536 main.go:141] libmachine: (ha-347193-m03)     
	I0920 18:00:22.403263  256536 main.go:141] libmachine: (ha-347193-m03)   </features>
	I0920 18:00:22.403273  256536 main.go:141] libmachine: (ha-347193-m03)   <cpu mode='host-passthrough'>
	I0920 18:00:22.403286  256536 main.go:141] libmachine: (ha-347193-m03)   
	I0920 18:00:22.403296  256536 main.go:141] libmachine: (ha-347193-m03)   </cpu>
	I0920 18:00:22.403305  256536 main.go:141] libmachine: (ha-347193-m03)   <os>
	I0920 18:00:22.403315  256536 main.go:141] libmachine: (ha-347193-m03)     <type>hvm</type>
	I0920 18:00:22.403326  256536 main.go:141] libmachine: (ha-347193-m03)     <boot dev='cdrom'/>
	I0920 18:00:22.403336  256536 main.go:141] libmachine: (ha-347193-m03)     <boot dev='hd'/>
	I0920 18:00:22.403346  256536 main.go:141] libmachine: (ha-347193-m03)     <bootmenu enable='no'/>
	I0920 18:00:22.403355  256536 main.go:141] libmachine: (ha-347193-m03)   </os>
	I0920 18:00:22.403364  256536 main.go:141] libmachine: (ha-347193-m03)   <devices>
	I0920 18:00:22.403375  256536 main.go:141] libmachine: (ha-347193-m03)     <disk type='file' device='cdrom'>
	I0920 18:00:22.403406  256536 main.go:141] libmachine: (ha-347193-m03)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/boot2docker.iso'/>
	I0920 18:00:22.403432  256536 main.go:141] libmachine: (ha-347193-m03)       <target dev='hdc' bus='scsi'/>
	I0920 18:00:22.403442  256536 main.go:141] libmachine: (ha-347193-m03)       <readonly/>
	I0920 18:00:22.403452  256536 main.go:141] libmachine: (ha-347193-m03)     </disk>
	I0920 18:00:22.403465  256536 main.go:141] libmachine: (ha-347193-m03)     <disk type='file' device='disk'>
	I0920 18:00:22.403477  256536 main.go:141] libmachine: (ha-347193-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:00:22.403493  256536 main.go:141] libmachine: (ha-347193-m03)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/ha-347193-m03.rawdisk'/>
	I0920 18:00:22.403506  256536 main.go:141] libmachine: (ha-347193-m03)       <target dev='hda' bus='virtio'/>
	I0920 18:00:22.403515  256536 main.go:141] libmachine: (ha-347193-m03)     </disk>
	I0920 18:00:22.403522  256536 main.go:141] libmachine: (ha-347193-m03)     <interface type='network'>
	I0920 18:00:22.403530  256536 main.go:141] libmachine: (ha-347193-m03)       <source network='mk-ha-347193'/>
	I0920 18:00:22.403537  256536 main.go:141] libmachine: (ha-347193-m03)       <model type='virtio'/>
	I0920 18:00:22.403545  256536 main.go:141] libmachine: (ha-347193-m03)     </interface>
	I0920 18:00:22.403554  256536 main.go:141] libmachine: (ha-347193-m03)     <interface type='network'>
	I0920 18:00:22.403563  256536 main.go:141] libmachine: (ha-347193-m03)       <source network='default'/>
	I0920 18:00:22.403572  256536 main.go:141] libmachine: (ha-347193-m03)       <model type='virtio'/>
	I0920 18:00:22.403580  256536 main.go:141] libmachine: (ha-347193-m03)     </interface>
	I0920 18:00:22.403598  256536 main.go:141] libmachine: (ha-347193-m03)     <serial type='pty'>
	I0920 18:00:22.403608  256536 main.go:141] libmachine: (ha-347193-m03)       <target port='0'/>
	I0920 18:00:22.403614  256536 main.go:141] libmachine: (ha-347193-m03)     </serial>
	I0920 18:00:22.403626  256536 main.go:141] libmachine: (ha-347193-m03)     <console type='pty'>
	I0920 18:00:22.403638  256536 main.go:141] libmachine: (ha-347193-m03)       <target type='serial' port='0'/>
	I0920 18:00:22.403648  256536 main.go:141] libmachine: (ha-347193-m03)     </console>
	I0920 18:00:22.403655  256536 main.go:141] libmachine: (ha-347193-m03)     <rng model='virtio'>
	I0920 18:00:22.403665  256536 main.go:141] libmachine: (ha-347193-m03)       <backend model='random'>/dev/random</backend>
	I0920 18:00:22.403669  256536 main.go:141] libmachine: (ha-347193-m03)     </rng>
	I0920 18:00:22.403674  256536 main.go:141] libmachine: (ha-347193-m03)     
	I0920 18:00:22.403680  256536 main.go:141] libmachine: (ha-347193-m03)     
	I0920 18:00:22.403685  256536 main.go:141] libmachine: (ha-347193-m03)   </devices>
	I0920 18:00:22.403691  256536 main.go:141] libmachine: (ha-347193-m03) </domain>
	I0920 18:00:22.403701  256536 main.go:141] libmachine: (ha-347193-m03) 
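
The block above is the raw libvirt domain XML that the kvm2 driver defines and then boots in the next step. For orientation only, a minimal Go sketch of defining and starting a domain from such an XML document with the libvirt-go bindings is shown below; this is an illustration under assumptions (connection URI, file path), not the kvm2 driver's actual implementation.

    // Illustrative sketch only: define a persistent KVM domain from an XML
    // document and boot it, mirroring the "define libvirt domain using xml"
    // and "Creating domain..." steps above. Not minikube's driver code.
    package main

    import (
        "log"
        "os"

        libvirt "github.com/libvirt/libvirt-go"
    )

    func defineAndStart(domainXML string) error {
        conn, err := libvirt.NewConnect("qemu:///system") // assumed libvirt URI
        if err != nil {
            return err
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(domainXML) // persistently define the guest
        if err != nil {
            return err
        }
        defer dom.Free()

        return dom.Create() // start (boot) the defined domain
    }

    func main() {
        xml, err := os.ReadFile("ha-347193-m03.xml") // placeholder path for the XML above
        if err != nil {
            log.Fatal(err)
        }
        if err := defineAndStart(string(xml)); err != nil {
            log.Fatal(err)
        }
    }
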
	I0920 18:00:22.411929  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:7f:8c:82 in network default
	I0920 18:00:22.412669  256536 main.go:141] libmachine: (ha-347193-m03) Ensuring networks are active...
	I0920 18:00:22.412689  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:22.413649  256536 main.go:141] libmachine: (ha-347193-m03) Ensuring network default is active
	I0920 18:00:22.414029  256536 main.go:141] libmachine: (ha-347193-m03) Ensuring network mk-ha-347193 is active
	I0920 18:00:22.414605  256536 main.go:141] libmachine: (ha-347193-m03) Getting domain xml...
	I0920 18:00:22.415371  256536 main.go:141] libmachine: (ha-347193-m03) Creating domain...
	I0920 18:00:23.690471  256536 main.go:141] libmachine: (ha-347193-m03) Waiting to get IP...
	I0920 18:00:23.691341  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:23.691801  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:23.691826  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:23.691771  257331 retry.go:31] will retry after 305.28803ms: waiting for machine to come up
	I0920 18:00:23.998411  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:23.999018  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:23.999037  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:23.998982  257331 retry.go:31] will retry after 325.282486ms: waiting for machine to come up
	I0920 18:00:24.325459  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:24.326038  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:24.326064  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:24.325997  257331 retry.go:31] will retry after 443.699467ms: waiting for machine to come up
	I0920 18:00:24.771839  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:24.772332  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:24.772360  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:24.772272  257331 retry.go:31] will retry after 425.456586ms: waiting for machine to come up
	I0920 18:00:25.199046  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:25.199733  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:25.199762  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:25.199691  257331 retry.go:31] will retry after 471.75067ms: waiting for machine to come up
	I0920 18:00:25.673494  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:25.674017  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:25.674046  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:25.673921  257331 retry.go:31] will retry after 587.223627ms: waiting for machine to come up
	I0920 18:00:26.262671  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:26.263313  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:26.263345  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:26.263252  257331 retry.go:31] will retry after 883.317566ms: waiting for machine to come up
	I0920 18:00:27.148800  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:27.149230  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:27.149252  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:27.149182  257331 retry.go:31] will retry after 1.299880509s: waiting for machine to come up
	I0920 18:00:28.450607  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:28.451213  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:28.451237  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:28.451146  257331 retry.go:31] will retry after 1.154105376s: waiting for machine to come up
	I0920 18:00:29.607236  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:29.607729  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:29.607762  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:29.607684  257331 retry.go:31] will retry after 1.399507975s: waiting for machine to come up
	I0920 18:00:31.009117  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:31.009614  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:31.009645  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:31.009556  257331 retry.go:31] will retry after 2.255483173s: waiting for machine to come up
	I0920 18:00:33.266732  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:33.267250  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:33.267280  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:33.267181  257331 retry.go:31] will retry after 3.331108113s: waiting for machine to come up
	I0920 18:00:36.602825  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:36.603313  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:36.603336  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:36.603267  257331 retry.go:31] will retry after 4.086437861s: waiting for machine to come up
	I0920 18:00:40.692990  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:40.693433  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:40.693462  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:40.693375  257331 retry.go:31] will retry after 5.025372778s: waiting for machine to come up
	I0920 18:00:45.723079  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:45.723614  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has current primary IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:45.723644  256536 main.go:141] libmachine: (ha-347193-m03) Found IP for machine: 192.168.39.250
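
The "will retry after ..." lines above come from a backoff loop that keeps polling the network for a DHCP lease matching the new NIC's MAC address until an IP appears. Below is a minimal, self-contained sketch of that pattern (growing, jittered delays under an overall deadline); the lookupIP callback and the specific delay values are assumptions, not minikube's retry package.

    // Illustrative sketch of a wait-for-IP loop with growing, jittered retries,
    // in the spirit of the "will retry after ..." log lines above.
    // lookupIP is a hypothetical callback; this is not minikube's retry.go.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil && ip != "" {
                return ip, nil
            }
            // Apply +/-50% jitter and grow the base delay, capped at roughly 5s.
            jittered := time.Duration(float64(delay) * (0.5 + rand.Float64()))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
            time.Sleep(jittered)
            if delay < 5*time.Second {
                delay = delay * 3 / 2
            }
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        // Stub lookup that never finds an IP, with a short timeout for demonstration.
        ip, err := waitForIP(func() (string, error) { return "", nil }, 2*time.Second)
        fmt.Println(ip, err)
    }
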
	I0920 18:00:45.723658  256536 main.go:141] libmachine: (ha-347193-m03) Reserving static IP address...
	I0920 18:00:45.724041  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find host DHCP lease matching {name: "ha-347193-m03", mac: "52:54:00:80:1a:4c", ip: "192.168.39.250"} in network mk-ha-347193
	I0920 18:00:45.808270  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Getting to WaitForSSH function...
	I0920 18:00:45.808305  256536 main.go:141] libmachine: (ha-347193-m03) Reserved static IP address: 192.168.39.250
	I0920 18:00:45.808317  256536 main.go:141] libmachine: (ha-347193-m03) Waiting for SSH to be available...
	I0920 18:00:45.811196  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:45.811660  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:minikube Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:45.811697  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:45.811825  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Using SSH client type: external
	I0920 18:00:45.811848  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa (-rw-------)
	I0920 18:00:45.811941  256536 main.go:141] libmachine: (ha-347193-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:00:45.811975  256536 main.go:141] libmachine: (ha-347193-m03) DBG | About to run SSH command:
	I0920 18:00:45.811991  256536 main.go:141] libmachine: (ha-347193-m03) DBG | exit 0
	I0920 18:00:45.942448  256536 main.go:141] libmachine: (ha-347193-m03) DBG | SSH cmd err, output: <nil>: 
	I0920 18:00:45.942757  256536 main.go:141] libmachine: (ha-347193-m03) KVM machine creation complete!
	I0920 18:00:45.943036  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetConfigRaw
	I0920 18:00:45.943611  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:45.943802  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:45.943956  256536 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:00:45.943968  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetState
	I0920 18:00:45.945108  256536 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:00:45.945127  256536 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:00:45.945134  256536 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:00:45.945143  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:45.947795  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:45.948180  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:45.948212  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:45.948362  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:45.948540  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:45.948731  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:45.948909  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:45.949088  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 18:00:45.949376  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0920 18:00:45.949397  256536 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:00:46.053564  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
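
At this point libmachine has switched from the external ssh binary to its native Go SSH client and verifies the guest by running `exit 0`. Sketched below is what such a probe can look like with golang.org/x/crypto/ssh; the address, user, and key path are placeholders, and this is an illustration rather than libmachine's implementation.

    // Illustrative sketch: probe a freshly created VM over SSH by running
    // `exit 0`, similar to the native-SSH-client check above.
    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func sshProbe(addr, user, keyPath string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        return session.Run("exit 0") // nil error means the guest's sshd answered
    }

    func main() {
        if err := sshProbe("192.168.39.250:22", "docker", "/path/to/id_rsa"); err != nil {
            log.Fatal(err)
        }
    }
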
	I0920 18:00:46.053620  256536 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:00:46.053632  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:46.056590  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.057022  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:46.057055  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.057256  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:46.057474  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.057655  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.057801  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:46.058159  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 18:00:46.058349  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0920 18:00:46.058359  256536 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:00:46.162650  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:00:46.162739  256536 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:00:46.162750  256536 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:00:46.162759  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetMachineName
	I0920 18:00:46.163059  256536 buildroot.go:166] provisioning hostname "ha-347193-m03"
	I0920 18:00:46.163088  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetMachineName
	I0920 18:00:46.163316  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:46.166267  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.166667  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:46.166690  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.166891  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:46.167092  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.167331  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.167501  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:46.167710  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 18:00:46.167873  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0920 18:00:46.167885  256536 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-347193-m03 && echo "ha-347193-m03" | sudo tee /etc/hostname
	I0920 18:00:46.284161  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-347193-m03
	
	I0920 18:00:46.284194  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:46.287604  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.288162  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:46.288212  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.288377  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:46.288598  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.288781  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.288997  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:46.289164  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 18:00:46.289333  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0920 18:00:46.289348  256536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-347193-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-347193-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-347193-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:00:46.403249  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:00:46.403284  256536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 18:00:46.403312  256536 buildroot.go:174] setting up certificates
	I0920 18:00:46.403323  256536 provision.go:84] configureAuth start
	I0920 18:00:46.403334  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetMachineName
	I0920 18:00:46.403661  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetIP
	I0920 18:00:46.407072  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.407456  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:46.407507  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.407605  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:46.410105  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.410437  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:46.410474  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.410693  256536 provision.go:143] copyHostCerts
	I0920 18:00:46.410731  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 18:00:46.410776  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 18:00:46.410788  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 18:00:46.410872  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 18:00:46.410969  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 18:00:46.410999  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 18:00:46.411009  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 18:00:46.411048  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 18:00:46.411112  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 18:00:46.411134  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 18:00:46.411141  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 18:00:46.411174  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 18:00:46.411245  256536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.ha-347193-m03 san=[127.0.0.1 192.168.39.250 ha-347193-m03 localhost minikube]
	I0920 18:00:46.589496  256536 provision.go:177] copyRemoteCerts
	I0920 18:00:46.589576  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:00:46.589611  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:46.592753  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.593174  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:46.593204  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.593452  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:46.593684  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.593864  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:46.594009  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa Username:docker}
	I0920 18:00:46.676664  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 18:00:46.676774  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:00:46.702866  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 18:00:46.702960  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 18:00:46.728033  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 18:00:46.728125  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:00:46.752902  256536 provision.go:87] duration metric: took 349.552078ms to configureAuth
	I0920 18:00:46.752934  256536 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:00:46.753136  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:00:46.753210  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:46.755906  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.756375  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:46.756398  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.756668  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:46.756899  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.757160  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.757332  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:46.757510  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 18:00:46.757706  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0920 18:00:46.757726  256536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:00:46.996420  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:00:46.996456  256536 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:00:46.996468  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetURL
	I0920 18:00:46.998173  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Using libvirt version 6000000
	I0920 18:00:47.000536  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.000948  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:47.001005  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.001175  256536 main.go:141] libmachine: Docker is up and running!
	I0920 18:00:47.001193  256536 main.go:141] libmachine: Reticulating splines...
	I0920 18:00:47.001204  256536 client.go:171] duration metric: took 24.931317889s to LocalClient.Create
	I0920 18:00:47.001232  256536 start.go:167] duration metric: took 24.931386973s to libmachine.API.Create "ha-347193"
	I0920 18:00:47.001245  256536 start.go:293] postStartSetup for "ha-347193-m03" (driver="kvm2")
	I0920 18:00:47.001262  256536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:00:47.001288  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:47.001582  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:00:47.001615  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:47.005636  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.006217  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:47.006249  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.006471  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:47.006730  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:47.006897  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:47.007131  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa Username:docker}
	I0920 18:00:47.088575  256536 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:00:47.093116  256536 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:00:47.093144  256536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 18:00:47.093215  256536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 18:00:47.093286  256536 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 18:00:47.093296  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /etc/ssl/certs/2448492.pem
	I0920 18:00:47.093380  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:00:47.103343  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:00:47.129139  256536 start.go:296] duration metric: took 127.87289ms for postStartSetup
	I0920 18:00:47.129196  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetConfigRaw
	I0920 18:00:47.129896  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetIP
	I0920 18:00:47.132942  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.133411  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:47.133437  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.133773  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 18:00:47.134091  256536 start.go:128] duration metric: took 25.084442035s to createHost
	I0920 18:00:47.134127  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:47.136774  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.137134  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:47.137159  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.137348  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:47.137616  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:47.137786  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:47.137992  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:47.138197  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 18:00:47.138375  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0920 18:00:47.138386  256536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:00:47.242925  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855247.221790500
	
	I0920 18:00:47.242952  256536 fix.go:216] guest clock: 1726855247.221790500
	I0920 18:00:47.242962  256536 fix.go:229] Guest: 2024-09-20 18:00:47.2217905 +0000 UTC Remote: 2024-09-20 18:00:47.134109422 +0000 UTC m=+147.450601767 (delta=87.681078ms)
	I0920 18:00:47.242983  256536 fix.go:200] guest clock delta is within tolerance: 87.681078ms
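
The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the drift when the delta is small. A small sketch of that comparison follows; the two-second tolerance here is an example value for the sketch, not the threshold minikube uses.

    // Illustrative sketch: parse `date +%s.%N` output from the guest and
    // compute the drift against the host clock, as in the "guest clock delta"
    // lines above. Tolerance is an arbitrary example value.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
        secs, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nanos int64
        if len(parts) == 2 {
            // %N always prints nine digits, so this is directly nanoseconds.
            if nanos, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return 0, err
            }
        }
        guest := time.Unix(secs, nanos)
        return guest.Sub(host), nil
    }

    func main() {
        delta, _ := guestClockDelta("1726855247.221790500", time.Now())
        const tolerance = 2 * time.Second // example threshold only
        fmt.Printf("guest clock delta %v (within tolerance: %v)\n",
            delta, delta < tolerance && delta > -tolerance)
    }
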
	I0920 18:00:47.242988  256536 start.go:83] releasing machines lock for "ha-347193-m03", held for 25.193498164s
	I0920 18:00:47.243006  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:47.243300  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetIP
	I0920 18:00:47.246354  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.246809  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:47.246844  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.249405  256536 out.go:177] * Found network options:
	I0920 18:00:47.251083  256536 out.go:177]   - NO_PROXY=192.168.39.246,192.168.39.241
	W0920 18:00:47.252536  256536 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 18:00:47.252563  256536 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 18:00:47.252582  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:47.253272  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:47.253546  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:47.253662  256536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:00:47.253727  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	W0920 18:00:47.253771  256536 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 18:00:47.253799  256536 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 18:00:47.253880  256536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:00:47.253928  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:47.256829  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.256923  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.257208  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:47.257233  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.257309  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:47.257347  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.257407  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:47.257616  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:47.257619  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:47.257870  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:47.257875  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:47.258038  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa Username:docker}
	I0920 18:00:47.258107  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:47.258329  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa Username:docker}
	I0920 18:00:47.495115  256536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:00:47.501076  256536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:00:47.501151  256536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:00:47.517330  256536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:00:47.517360  256536 start.go:495] detecting cgroup driver to use...
	I0920 18:00:47.517421  256536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:00:47.534608  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:00:47.549798  256536 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:00:47.549868  256536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:00:47.564991  256536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:00:47.580654  256536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:00:47.705785  256536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:00:47.870467  256536 docker.go:233] disabling docker service ...
	I0920 18:00:47.870543  256536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:00:47.889659  256536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:00:47.904008  256536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:00:48.037069  256536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:00:48.172437  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:00:48.186077  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:00:48.205661  256536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:00:48.205724  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:00:48.216421  256536 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:00:48.216509  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:00:48.228291  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:00:48.239306  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:00:48.249763  256536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:00:48.260784  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:00:48.271597  256536 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:00:48.290072  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:00:48.301232  256536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:00:48.311548  256536 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:00:48.311624  256536 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:00:48.327406  256536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:00:48.338454  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:00:48.463827  256536 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:00:48.563927  256536 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:00:48.564016  256536 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:00:48.569050  256536 start.go:563] Will wait 60s for crictl version
	I0920 18:00:48.569137  256536 ssh_runner.go:195] Run: which crictl
	I0920 18:00:48.573089  256536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:00:48.612882  256536 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:00:48.612989  256536 ssh_runner.go:195] Run: crio --version
	I0920 18:00:48.641884  256536 ssh_runner.go:195] Run: crio --version
	I0920 18:00:48.674772  256536 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:00:48.676208  256536 out.go:177]   - env NO_PROXY=192.168.39.246
	I0920 18:00:48.677575  256536 out.go:177]   - env NO_PROXY=192.168.39.246,192.168.39.241
	I0920 18:00:48.679175  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetIP
	I0920 18:00:48.682184  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:48.682668  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:48.682700  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:48.682899  256536 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:00:48.687203  256536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:00:48.700132  256536 mustload.go:65] Loading cluster: ha-347193
	I0920 18:00:48.700432  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:00:48.700738  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:00:48.700780  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:00:48.718208  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I0920 18:00:48.718740  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:00:48.719373  256536 main.go:141] libmachine: Using API Version  1
	I0920 18:00:48.719397  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:00:48.719797  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:00:48.720025  256536 main.go:141] libmachine: (ha-347193) Calling .GetState
	I0920 18:00:48.722026  256536 host.go:66] Checking if "ha-347193" exists ...
	I0920 18:00:48.722319  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:00:48.722366  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:00:48.738476  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42131
	I0920 18:00:48.739047  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:00:48.739709  256536 main.go:141] libmachine: Using API Version  1
	I0920 18:00:48.739737  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:00:48.740150  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:00:48.740408  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:00:48.740641  256536 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193 for IP: 192.168.39.250
	I0920 18:00:48.740657  256536 certs.go:194] generating shared ca certs ...
	I0920 18:00:48.740678  256536 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:00:48.740861  256536 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 18:00:48.740924  256536 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 18:00:48.740938  256536 certs.go:256] generating profile certs ...
	I0920 18:00:48.741049  256536 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key
	I0920 18:00:48.741086  256536 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.071b5fb5
	I0920 18:00:48.741106  256536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.071b5fb5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.241 192.168.39.250 192.168.39.254]
	I0920 18:00:48.849787  256536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.071b5fb5 ...
	I0920 18:00:48.849825  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.071b5fb5: {Name:mk94b8924122fda4caf4db9161420b6f420a2437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:00:48.850030  256536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.071b5fb5 ...
	I0920 18:00:48.850042  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.071b5fb5: {Name:mk6d1c5532994e70c91ba359922d7d11837270cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:00:48.850120  256536 certs.go:381] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.071b5fb5 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt
	I0920 18:00:48.850256  256536 certs.go:385] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.071b5fb5 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key
	I0920 18:00:48.850383  256536 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key
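
The certs.go lines above generate an apiserver certificate whose SANs cover the cluster service IP, localhost, and every control-plane node address, signed by the shared minikube CA. Below is a rough standard-library sketch of issuing a server certificate with IP SANs from a CA; the names, SAN list, and key sizes are illustrative assumptions, not minikube's crypto code.

    // Illustrative sketch: issue a server certificate with IP SANs from a CA,
    // analogous to the apiserver cert generation above. Standard library only.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Assumption: a CA key pair would already exist; one is generated here
        // only so the sketch is self-contained. Errors trimmed for the CA part.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with an illustrative subset of the SANs logged above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "ha-347193-m03"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-347193-m03", "localhost", "minikube"},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"),
                net.ParseIP("192.168.39.250"),
            },
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
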
	I0920 18:00:48.850401  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 18:00:48.850413  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 18:00:48.850425  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 18:00:48.850434  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 18:00:48.850447  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 18:00:48.850458  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 18:00:48.850472  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 18:00:48.866055  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 18:00:48.866157  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 18:00:48.866197  256536 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 18:00:48.866207  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:00:48.866228  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:00:48.866250  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:00:48.866268  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 18:00:48.866305  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:00:48.866332  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:00:48.866346  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem -> /usr/share/ca-certificates/244849.pem
	I0920 18:00:48.866361  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /usr/share/ca-certificates/2448492.pem
	I0920 18:00:48.866398  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:00:48.869320  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:00:48.869797  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:00:48.869831  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:00:48.870003  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:00:48.870250  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:00:48.870392  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:00:48.870532  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 18:00:48.946355  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 18:00:48.951957  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 18:00:48.963708  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 18:00:48.968268  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0920 18:00:48.979656  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 18:00:48.983832  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 18:00:48.995975  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 18:00:48.999924  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0920 18:00:49.010455  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 18:00:49.014784  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 18:00:49.025741  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 18:00:49.030881  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0920 18:00:49.042858  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:00:49.071216  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:00:49.096135  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:00:49.120994  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:00:49.146256  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0920 18:00:49.170936  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:00:49.195738  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:00:49.219660  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:00:49.243873  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:00:49.268501  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 18:00:49.293119  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 18:00:49.317663  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 18:00:49.336046  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0920 18:00:49.352794  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 18:00:49.370728  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0920 18:00:49.388727  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 18:00:49.406268  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0920 18:00:49.422685  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
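The transfers above fall into two groups: the profile-specific certificates (ca.*, apiserver.*, proxy-client.*) are pushed from the Jenkins workspace to /var/lib/minikube/certs, while the cluster-shared material (sa.pub/sa.key, the front-proxy CA and the etcd CA) is first read over SSH into memory and then written back out from memory, so every control-plane member ends up with identical signing keys. A minimal sketch of the read-into-memory half, assuming key-based SSH access; the host, user and key path below are illustrative and not minikube's actual ssh_runner code:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// readRemote pulls one file from a control-plane machine into memory,
// analogous to the `scp /var/lib/minikube/certs/sa.key --> memory` lines above.
func readRemote(client *ssh.Client, path string) ([]byte, error) {
	sess, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer sess.Close()
	return sess.Output("sudo cat " + path)
}

func main() {
	// Illustrative key path and address; the log uses the profile's machines/ha-347193/id_rsa.
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/ha-347193/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.39.246:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	saKey, err := readRemote(client, "/var/lib/minikube/certs/sa.key")
	if err != nil {
		panic(err)
	}
	fmt.Printf("fetched sa.key: %d bytes\n", len(saKey))
}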
	I0920 18:00:49.439002  256536 ssh_runner.go:195] Run: openssl version
	I0920 18:00:49.444882  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:00:49.456482  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:00:49.461403  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:00:49.461480  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:00:49.470070  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:00:49.481997  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 18:00:49.496420  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 18:00:49.501453  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 18:00:49.501530  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 18:00:49.508441  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 18:00:49.521740  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 18:00:49.535641  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 18:00:49.541368  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 18:00:49.541431  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 18:00:49.547775  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
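The three openssl/ln pairs above install each CA under OpenSSL's hashed-name convention: a certificate in /etc/ssl/certs is looked up through a symlink named <subject-hash>.0 (b5213941.0 for minikubeCA.pem, for example). A minimal sketch of the same technique, assuming openssl is on PATH; installCA is an illustrative helper, not part of minikube:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath into /etc/ssl/certs under its OpenSSL
// subject-hash name (<hash>.0), the same effect as the
// `openssl x509 -hash -noout` plus `ln -fs` pairs in the log.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // `ln -fs` semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}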
	I0920 18:00:49.559535  256536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:00:49.563545  256536 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:00:49.563612  256536 kubeadm.go:934] updating node {m03 192.168.39.250 8443 v1.31.1 crio true true} ...
	I0920 18:00:49.563727  256536 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-347193-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:00:49.563772  256536 kube-vip.go:115] generating kube-vip config ...
	I0920 18:00:49.563822  256536 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 18:00:49.580897  256536 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 18:00:49.580978  256536 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
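The manifest above is rendered in memory and, a few lines further down, copied to /etc/kubernetes/manifests/kube-vip.yaml, where the kubelet runs it as a static pod holding the 192.168.39.254 control-plane VIP. A cut-down sketch of how such a manifest can be templated with Go's text/template; this is illustrative only and not minikube's actual kube-vip template:

package main

import (
	"os"
	"text/template"
)

// A stand-in for the manifest above: only the values that vary per cluster
// (the VIP and the API server port) are templated.
const vipManifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(vipManifest))
	// Writing the rendered manifest into /etc/kubernetes/manifests makes the
	// kubelet run it as a static pod on every control-plane node.
	if err := tmpl.Execute(os.Stdout, struct {
		VIP  string
		Port int
	}{VIP: "192.168.39.254", Port: 8443}); err != nil {
		panic(err)
	}
}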
	I0920 18:00:49.581038  256536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:00:49.590566  256536 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 18:00:49.590695  256536 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 18:00:49.600047  256536 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0920 18:00:49.600048  256536 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0920 18:00:49.600092  256536 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 18:00:49.600085  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 18:00:49.600108  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 18:00:49.600145  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:00:49.600623  256536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 18:00:49.600694  256536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 18:00:49.606126  256536 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 18:00:49.606169  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 18:00:49.632538  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 18:00:49.632673  256536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 18:00:49.632669  256536 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 18:00:49.632772  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 18:00:49.675110  256536 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 18:00:49.675165  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
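The kubeadm, kubectl and kubelet transfers above follow a probe-then-copy pattern: stat the destination first and only scp the binary when the probe exits non-zero, which avoids re-sending 50-75 MB files on every start. A minimal sketch of that pattern; the run and scp callbacks are placeholders, not minikube's ssh_runner API:

package main

import "fmt"

// copyIfMissing mirrors the log's pattern for kubeadm, kubectl and kubelet:
// probe the destination with stat and only transfer when the probe fails.
func copyIfMissing(run func(cmd string) error, scp func(src, dst string) error, src, dst string) error {
	if err := run(fmt.Sprintf(`stat -c "%%s %%y" %s`, dst)); err == nil {
		return nil // destination already exists; skip the (large) transfer
	}
	return scp(src, dst)
}

func main() {
	// Dry-run wiring with stub callbacks, just to show the call shape.
	run := func(cmd string) error { return fmt.Errorf("simulated failure: %s", cmd) }
	scp := func(src, dst string) error { fmt.Printf("would copy %s -> %s\n", src, dst); return nil }
	_ = copyIfMissing(run, scp,
		"/home/jenkins/.minikube/cache/linux/amd64/v1.31.1/kubeadm",
		"/var/lib/minikube/binaries/v1.31.1/kubeadm")
}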
	I0920 18:00:50.517293  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 18:00:50.527931  256536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 18:00:50.545163  256536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:00:50.562804  256536 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 18:00:50.579873  256536 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 18:00:50.583899  256536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:00:50.595871  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:00:50.727492  256536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:00:50.746998  256536 host.go:66] Checking if "ha-347193" exists ...
	I0920 18:00:50.747552  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:00:50.747621  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:00:50.764998  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35973
	I0920 18:00:50.765568  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:00:50.766259  256536 main.go:141] libmachine: Using API Version  1
	I0920 18:00:50.766285  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:00:50.766697  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:00:50.766924  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:00:50.767151  256536 start.go:317] joinCluster: &{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:00:50.767302  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 18:00:50.767319  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:00:50.770123  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:00:50.770554  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:00:50.770590  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:00:50.770696  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:00:50.770948  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:00:50.771120  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:00:50.771276  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 18:00:50.937328  256536 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:00:50.937401  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token w3dh0u.en8aqh39le5u0uln --discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-347193-m03 --control-plane --apiserver-advertise-address=192.168.39.250 --apiserver-bind-port=8443"
	I0920 18:01:13.927196  256536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token w3dh0u.en8aqh39le5u0uln --discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-347193-m03 --control-plane --apiserver-advertise-address=192.168.39.250 --apiserver-bind-port=8443": (22.989760407s)
	I0920 18:01:13.927243  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 18:01:14.543516  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-347193-m03 minikube.k8s.io/updated_at=2024_09_20T18_01_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=ha-347193 minikube.k8s.io/primary=false
	I0920 18:01:14.679099  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-347193-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 18:01:14.820428  256536 start.go:319] duration metric: took 24.053268109s to joinCluster
	I0920 18:01:14.820517  256536 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:01:14.820875  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:01:14.822533  256536 out.go:177] * Verifying Kubernetes components...
	I0920 18:01:14.823874  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:01:15.125787  256536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:01:15.183134  256536 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 18:01:15.183424  256536 kapi.go:59] client config for ha-347193: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.crt", KeyFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key", CAFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 18:01:15.183503  256536 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.246:8443
	I0920 18:01:15.183888  256536 node_ready.go:35] waiting up to 6m0s for node "ha-347193-m03" to be "Ready" ...
	I0920 18:01:15.184021  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:15.184034  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:15.184045  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:15.184057  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:15.188812  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:15.684732  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:15.684762  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:15.684773  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:15.684779  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:15.688455  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:16.184249  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:16.184278  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:16.184290  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:16.184296  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:16.188149  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:16.684238  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:16.684266  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:16.684276  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:16.684280  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:16.688135  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:17.184574  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:17.184605  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:17.184616  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:17.184622  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:17.188720  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:17.189742  256536 node_ready.go:53] node "ha-347193-m03" has status "Ready":"False"
	I0920 18:01:17.684157  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:17.684188  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:17.684200  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:17.684205  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:17.687993  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:18.184987  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:18.185016  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:18.185027  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:18.185033  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:18.188436  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:18.684240  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:18.684263  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:18.684270  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:18.684274  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:18.688063  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:19.184814  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:19.184846  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:19.184859  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:19.184868  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:19.189842  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:19.190448  256536 node_ready.go:53] node "ha-347193-m03" has status "Ready":"False"
	I0920 18:01:19.684861  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:19.684890  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:19.684901  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:19.684908  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:19.688056  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:20.184157  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:20.184183  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:20.184192  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:20.184196  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:20.190785  256536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 18:01:20.684195  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:20.684230  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:20.684241  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:20.684245  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:20.688027  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:21.185183  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:21.185207  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:21.185216  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:21.185221  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:21.188774  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:21.684314  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:21.684338  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:21.684350  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:21.684355  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:21.687635  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:21.688202  256536 node_ready.go:53] node "ha-347193-m03" has status "Ready":"False"
	I0920 18:01:22.185048  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:22.185073  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:22.185084  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:22.185089  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:22.188754  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:22.684520  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:22.684570  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:22.684579  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:22.684584  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:22.688376  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:23.184575  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:23.184600  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:23.184608  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:23.184612  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:23.189052  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:23.684932  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:23.684955  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:23.684965  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:23.684968  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:23.688597  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:23.689108  256536 node_ready.go:53] node "ha-347193-m03" has status "Ready":"False"
	I0920 18:01:24.184308  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:24.184334  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:24.184344  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:24.184350  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:24.188092  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:24.684218  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:24.684252  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:24.684261  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:24.684264  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:24.688018  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:25.184193  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:25.184221  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:25.184232  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:25.184237  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:25.188243  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:25.684786  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:25.684818  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:25.684830  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:25.684837  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:25.687395  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:01:26.184220  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:26.184255  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:26.184270  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:26.184273  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:26.188544  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:26.189181  256536 node_ready.go:53] node "ha-347193-m03" has status "Ready":"False"
	I0920 18:01:26.684404  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:26.684432  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:26.684445  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:26.684452  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:26.688821  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:27.184155  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:27.184182  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:27.184191  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:27.184194  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:27.187676  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:27.684611  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:27.684643  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:27.684651  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:27.684654  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:27.688751  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:28.184312  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:28.184339  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:28.184347  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:28.184350  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:28.188272  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:28.684161  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:28.684200  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:28.684208  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:28.684212  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:28.687898  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:28.688502  256536 node_ready.go:53] node "ha-347193-m03" has status "Ready":"False"
	I0920 18:01:29.184527  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:29.184554  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:29.184563  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:29.184570  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:29.188227  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:29.685118  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:29.685147  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:29.685157  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:29.685159  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:29.689095  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:30.184672  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:30.184697  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:30.184705  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:30.184709  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:30.188058  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:30.685162  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:30.685189  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:30.685200  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:30.685206  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:30.688686  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:30.689119  256536 node_ready.go:53] node "ha-347193-m03" has status "Ready":"False"
	I0920 18:01:31.184362  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:31.184388  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:31.184397  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:31.184401  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:31.188508  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:31.684348  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:31.684374  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:31.684382  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:31.684388  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:31.688113  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:32.184592  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:32.184620  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.184629  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.184633  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.188695  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:32.684894  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:32.684920  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.684929  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.684933  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.688521  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:32.689073  256536 node_ready.go:49] node "ha-347193-m03" has status "Ready":"True"
	I0920 18:01:32.689098  256536 node_ready.go:38] duration metric: took 17.505173835s for node "ha-347193-m03" to be "Ready" ...
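The node_ready loop above is a plain poll: GET /api/v1/nodes/ha-347193-m03 roughly every 500ms until the NodeReady condition reports True, with a 6m ceiling. A minimal client-go sketch of the same loop; the kubeconfig path is illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls GET /api/v1/nodes/<name> until the NodeReady
// condition is True, mirroring the node_ready lines in the log.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // illustrative kubeconfig
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(cs, "ha-347193-m03", 6*time.Minute))
}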
	I0920 18:01:32.689108  256536 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:01:32.689179  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:01:32.689189  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.689196  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.689200  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.713301  256536 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0920 18:01:32.721489  256536 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6llmd" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.721627  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6llmd
	I0920 18:01:32.721638  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.721649  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.721660  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.731687  256536 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0920 18:01:32.732373  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:32.732393  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.732404  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.732410  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.740976  256536 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 18:01:32.741470  256536 pod_ready.go:93] pod "coredns-7c65d6cfc9-6llmd" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:32.741487  256536 pod_ready.go:82] duration metric: took 19.962818ms for pod "coredns-7c65d6cfc9-6llmd" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.741496  256536 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bkmhn" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.741558  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bkmhn
	I0920 18:01:32.741564  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.741572  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.741578  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.754720  256536 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0920 18:01:32.755448  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:32.755463  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.755471  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.755475  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.764627  256536 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0920 18:01:32.765312  256536 pod_ready.go:93] pod "coredns-7c65d6cfc9-bkmhn" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:32.765342  256536 pod_ready.go:82] duration metric: took 23.838489ms for pod "coredns-7c65d6cfc9-bkmhn" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.765357  256536 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.765462  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-347193
	I0920 18:01:32.765474  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.765484  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.765492  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.774103  256536 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 18:01:32.774830  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:32.774850  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.774858  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.774861  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.777561  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:01:32.778082  256536 pod_ready.go:93] pod "etcd-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:32.778110  256536 pod_ready.go:82] duration metric: took 12.744363ms for pod "etcd-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.778122  256536 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.778202  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-347193-m02
	I0920 18:01:32.778213  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.778225  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.778234  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.781035  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:01:32.781896  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:32.781933  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.781945  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.781950  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.784612  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:01:32.785026  256536 pod_ready.go:93] pod "etcd-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:32.785044  256536 pod_ready.go:82] duration metric: took 6.912479ms for pod "etcd-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.785057  256536 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.885398  256536 request.go:632] Waited for 100.268978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-347193-m03
	I0920 18:01:32.885496  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-347193-m03
	I0920 18:01:32.885505  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.885513  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.885520  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.889795  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:33.084880  256536 request.go:632] Waited for 194.30681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:33.084946  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:33.084952  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:33.084960  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:33.084964  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:33.088321  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:33.088961  256536 pod_ready.go:93] pod "etcd-ha-347193-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:33.088982  256536 pod_ready.go:82] duration metric: took 303.916513ms for pod "etcd-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
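The "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's token-bucket limiter, which defaults to QPS 5 and burst 10 on rest.Config; the back-to-back pod and node GETs exhaust the burst, so later requests queue for 100-200ms. A minimal sketch of raising those limits when building the client; the values are illustrative:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10; raising them reduces the client-side
	// "Waited ..." delays seen in the log (values chosen only for illustration).
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg)
}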
	I0920 18:01:33.089001  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:33.285463  256536 request.go:632] Waited for 196.366216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193
	I0920 18:01:33.285538  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193
	I0920 18:01:33.285544  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:33.285553  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:33.285557  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:33.289153  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:33.485283  256536 request.go:632] Waited for 195.396109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:33.485343  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:33.485349  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:33.485363  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:33.485368  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:33.488640  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:33.489171  256536 pod_ready.go:93] pod "kube-apiserver-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:33.489194  256536 pod_ready.go:82] duration metric: took 400.186326ms for pod "kube-apiserver-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:33.489203  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:33.685381  256536 request.go:632] Waited for 196.09905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193-m02
	I0920 18:01:33.685495  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193-m02
	I0920 18:01:33.685509  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:33.685526  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:33.685534  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:33.689644  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:33.885477  256536 request.go:632] Waited for 194.996096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:33.885557  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:33.885565  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:33.885575  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:33.885584  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:33.888804  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:33.889531  256536 pod_ready.go:93] pod "kube-apiserver-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:33.889552  256536 pod_ready.go:82] duration metric: took 400.342117ms for pod "kube-apiserver-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:33.889562  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:34.085670  256536 request.go:632] Waited for 196.018178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193-m03
	I0920 18:01:34.085746  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193-m03
	I0920 18:01:34.085754  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:34.085766  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:34.085774  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:34.089521  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:34.285667  256536 request.go:632] Waited for 195.397565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:34.285731  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:34.285736  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:34.285744  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:34.285747  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:34.289576  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:34.290194  256536 pod_ready.go:93] pod "kube-apiserver-ha-347193-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:34.290225  256536 pod_ready.go:82] duration metric: took 400.654429ms for pod "kube-apiserver-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:34.290241  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:34.485359  256536 request.go:632] Waited for 195.022891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193
	I0920 18:01:34.485429  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193
	I0920 18:01:34.485446  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:34.485459  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:34.485466  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:34.489143  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:34.685371  256536 request.go:632] Waited for 195.396623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:34.685455  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:34.685461  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:34.685471  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:34.685477  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:34.688902  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:34.689635  256536 pod_ready.go:93] pod "kube-controller-manager-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:34.689658  256536 pod_ready.go:82] duration metric: took 399.407979ms for pod "kube-controller-manager-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:34.689671  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:34.885295  256536 request.go:632] Waited for 195.53866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193-m02
	I0920 18:01:34.885360  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193-m02
	I0920 18:01:34.885365  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:34.885373  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:34.885377  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:34.888992  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:35.085267  256536 request.go:632] Waited for 195.362009ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:35.085328  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:35.085334  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:35.085345  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:35.085356  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:35.088980  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:35.090052  256536 pod_ready.go:93] pod "kube-controller-manager-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:35.090080  256536 pod_ready.go:82] duration metric: took 400.399772ms for pod "kube-controller-manager-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:35.090093  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:35.285052  256536 request.go:632] Waited for 194.845569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193-m03
	I0920 18:01:35.285131  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193-m03
	I0920 18:01:35.285140  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:35.285150  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:35.285160  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:35.288701  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:35.484934  256536 request.go:632] Waited for 195.307179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:35.485011  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:35.485016  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:35.485024  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:35.485033  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:35.488224  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:35.488823  256536 pod_ready.go:93] pod "kube-controller-manager-ha-347193-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:35.488842  256536 pod_ready.go:82] duration metric: took 398.741341ms for pod "kube-controller-manager-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:35.488859  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ffdvq" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:35.684978  256536 request.go:632] Waited for 196.047954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ffdvq
	I0920 18:01:35.685045  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ffdvq
	I0920 18:01:35.685051  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:35.685059  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:35.685063  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:35.689004  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:35.885928  256536 request.go:632] Waited for 196.269085ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:35.886004  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:35.886014  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:35.886025  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:35.886035  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:35.889926  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:35.890483  256536 pod_ready.go:93] pod "kube-proxy-ffdvq" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:35.890511  256536 pod_ready.go:82] duration metric: took 401.643812ms for pod "kube-proxy-ffdvq" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:35.890526  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pccxp" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:36.085261  256536 request.go:632] Waited for 194.62795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pccxp
	I0920 18:01:36.085385  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pccxp
	I0920 18:01:36.085393  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:36.085402  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:36.085408  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:36.089652  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:36.285734  256536 request.go:632] Waited for 195.416978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:36.285799  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:36.285804  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:36.285812  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:36.285816  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:36.289287  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:36.289898  256536 pod_ready.go:93] pod "kube-proxy-pccxp" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:36.289950  256536 pod_ready.go:82] duration metric: took 399.411009ms for pod "kube-proxy-pccxp" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:36.289967  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rdqkg" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:36.484907  256536 request.go:632] Waited for 194.838014ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdqkg
	I0920 18:01:36.485002  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdqkg
	I0920 18:01:36.485015  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:36.485026  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:36.485035  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:36.488569  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:36.685854  256536 request.go:632] Waited for 196.449208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:36.685961  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:36.685971  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:36.685979  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:36.685982  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:36.690267  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:36.691030  256536 pod_ready.go:93] pod "kube-proxy-rdqkg" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:36.691060  256536 pod_ready.go:82] duration metric: took 401.083761ms for pod "kube-proxy-rdqkg" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:36.691073  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:36.884877  256536 request.go:632] Waited for 193.713134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193
	I0920 18:01:36.884990  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193
	I0920 18:01:36.885002  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:36.885014  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:36.885023  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:36.888846  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:37.086004  256536 request.go:632] Waited for 196.564771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:37.086085  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:37.086094  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:37.086106  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:37.086115  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:37.090524  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:37.091265  256536 pod_ready.go:93] pod "kube-scheduler-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:37.091290  256536 pod_ready.go:82] duration metric: took 400.207966ms for pod "kube-scheduler-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:37.091300  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:37.285288  256536 request.go:632] Waited for 193.886376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193-m02
	I0920 18:01:37.285368  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193-m02
	I0920 18:01:37.285376  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:37.285388  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:37.285396  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:37.288742  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:37.485296  256536 request.go:632] Waited for 196.041594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:37.485365  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:37.485370  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:37.485379  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:37.485382  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:37.488438  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:37.488873  256536 pod_ready.go:93] pod "kube-scheduler-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:37.488894  256536 pod_ready.go:82] duration metric: took 397.585949ms for pod "kube-scheduler-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:37.488904  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:37.684947  256536 request.go:632] Waited for 195.929511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193-m03
	I0920 18:01:37.685019  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193-m03
	I0920 18:01:37.685027  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:37.685037  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:37.685042  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:37.688698  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:37.885884  256536 request.go:632] Waited for 196.412935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:37.885988  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:37.885998  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:37.886006  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:37.886010  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:37.889509  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:37.890123  256536 pod_ready.go:93] pod "kube-scheduler-ha-347193-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:37.890146  256536 pod_ready.go:82] duration metric: took 401.23569ms for pod "kube-scheduler-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:37.890158  256536 pod_ready.go:39] duration metric: took 5.201039475s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:01:37.890178  256536 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:01:37.890240  256536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:01:37.905594  256536 api_server.go:72] duration metric: took 23.085026432s to wait for apiserver process to appear ...
	I0920 18:01:37.905621  256536 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:01:37.905659  256536 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I0920 18:01:37.910576  256536 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I0920 18:01:37.910667  256536 round_trippers.go:463] GET https://192.168.39.246:8443/version
	I0920 18:01:37.910679  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:37.910691  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:37.910701  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:37.911708  256536 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0920 18:01:37.911795  256536 api_server.go:141] control plane version: v1.31.1
	I0920 18:01:37.911813  256536 api_server.go:131] duration metric: took 6.185417ms to wait for apiserver health ...
	I0920 18:01:37.911822  256536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:01:38.085341  256536 request.go:632] Waited for 173.386572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:01:38.085419  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:01:38.085431  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:38.085456  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:38.085465  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:38.091784  256536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 18:01:38.097649  256536 system_pods.go:59] 24 kube-system pods found
	I0920 18:01:38.097681  256536 system_pods.go:61] "coredns-7c65d6cfc9-6llmd" [8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92] Running
	I0920 18:01:38.097686  256536 system_pods.go:61] "coredns-7c65d6cfc9-bkmhn" [f7862a6e-54cc-450c-b283-d20fb99f51ce] Running
	I0920 18:01:38.097691  256536 system_pods.go:61] "etcd-ha-347193" [e13fc198-b02b-4f0a-bf76-be0f519d9d57] Running
	I0920 18:01:38.097695  256536 system_pods.go:61] "etcd-ha-347193-m02" [4ea69953-b35a-4ae9-8153-cea3be5e2c1c] Running
	I0920 18:01:38.097698  256536 system_pods.go:61] "etcd-ha-347193-m03" [e83dd2f3-86bc-466d-9913-390f756db956] Running
	I0920 18:01:38.097701  256536 system_pods.go:61] "kindnet-5msnk" [af184b84-65ce-4ba0-879e-87ec81029f7e] Running
	I0920 18:01:38.097705  256536 system_pods.go:61] "kindnet-cqbxl" [3d49a6b1-5be5-4d96-98e3-bd05035a2d1b] Running
	I0920 18:01:38.097708  256536 system_pods.go:61] "kindnet-z24zp" [9271d251-2d95-4b23-85f3-7da6567b2fc3] Running
	I0920 18:01:38.097711  256536 system_pods.go:61] "kube-apiserver-ha-347193" [993ccf05-a39a-42b4-b82d-936531325dc4] Running
	I0920 18:01:38.097714  256536 system_pods.go:61] "kube-apiserver-ha-347193-m02" [43cd77b8-8925-4a04-a8cf-1b9a0cbbc502] Running
	I0920 18:01:38.097718  256536 system_pods.go:61] "kube-apiserver-ha-347193-m03" [02b7bcea-c245-4b1e-9be5-e815d4aceb74] Running
	I0920 18:01:38.097721  256536 system_pods.go:61] "kube-controller-manager-ha-347193" [6de3a14b-6587-45d4-aaee-1256b9c327cc] Running
	I0920 18:01:38.097724  256536 system_pods.go:61] "kube-controller-manager-ha-347193-m02" [cdf3f4d7-0675-4c59-8ad5-8901104d71c3] Running
	I0920 18:01:38.097727  256536 system_pods.go:61] "kube-controller-manager-ha-347193-m03" [3a4a0044-50e7-475a-9be9-76edda1c27ab] Running
	I0920 18:01:38.097729  256536 system_pods.go:61] "kube-proxy-ffdvq" [97120f62-0af2-405a-b8ff-639c72a39a2d] Running
	I0920 18:01:38.097732  256536 system_pods.go:61] "kube-proxy-pccxp" [3a4882b7-f59f-47d4-b2dc-d5b7f8f0d2c7] Running
	I0920 18:01:38.097735  256536 system_pods.go:61] "kube-proxy-rdqkg" [d9ae4e37-b29b-400a-af2d-544da4024069] Running
	I0920 18:01:38.097738  256536 system_pods.go:61] "kube-scheduler-ha-347193" [910baa0e-404e-4ac7-9262-848672eaf9cf] Running
	I0920 18:01:38.097743  256536 system_pods.go:61] "kube-scheduler-ha-347193-m02" [623b9c3b-b998-4516-a53e-17e9d8970594] Running
	I0920 18:01:38.097749  256536 system_pods.go:61] "kube-scheduler-ha-347193-m03" [cd08009b-7b3e-4c73-a2a0-824d43a19c0e] Running
	I0920 18:01:38.097751  256536 system_pods.go:61] "kube-vip-ha-347193" [20d6faa4-600f-4bd0-8acb-1f95c047da58] Running
	I0920 18:01:38.097754  256536 system_pods.go:61] "kube-vip-ha-347193-m02" [1455826c-7b3d-40f7-bb15-a9861ee95e19] Running
	I0920 18:01:38.097757  256536 system_pods.go:61] "kube-vip-ha-347193-m03" [d6b869ce-4510-400c-b8e9-6e3bec9718e4] Running
	I0920 18:01:38.097759  256536 system_pods.go:61] "storage-provisioner" [8924f7ce-85a0-4587-9c05-8a74c7113e9e] Running
	I0920 18:01:38.097766  256536 system_pods.go:74] duration metric: took 185.936377ms to wait for pod list to return data ...
	I0920 18:01:38.097773  256536 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:01:38.285212  256536 request.go:632] Waited for 187.355991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:01:38.285280  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:01:38.285285  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:38.285293  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:38.285298  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:38.290019  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:38.290139  256536 default_sa.go:45] found service account: "default"
	I0920 18:01:38.290156  256536 default_sa.go:55] duration metric: took 192.375892ms for default service account to be created ...
	I0920 18:01:38.290165  256536 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:01:38.485546  256536 request.go:632] Waited for 195.287049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:01:38.485611  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:01:38.485616  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:38.485641  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:38.485645  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:38.491609  256536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:01:38.498558  256536 system_pods.go:86] 24 kube-system pods found
	I0920 18:01:38.498588  256536 system_pods.go:89] "coredns-7c65d6cfc9-6llmd" [8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92] Running
	I0920 18:01:38.498594  256536 system_pods.go:89] "coredns-7c65d6cfc9-bkmhn" [f7862a6e-54cc-450c-b283-d20fb99f51ce] Running
	I0920 18:01:38.498598  256536 system_pods.go:89] "etcd-ha-347193" [e13fc198-b02b-4f0a-bf76-be0f519d9d57] Running
	I0920 18:01:38.498602  256536 system_pods.go:89] "etcd-ha-347193-m02" [4ea69953-b35a-4ae9-8153-cea3be5e2c1c] Running
	I0920 18:01:38.498606  256536 system_pods.go:89] "etcd-ha-347193-m03" [e83dd2f3-86bc-466d-9913-390f756db956] Running
	I0920 18:01:38.498610  256536 system_pods.go:89] "kindnet-5msnk" [af184b84-65ce-4ba0-879e-87ec81029f7e] Running
	I0920 18:01:38.498614  256536 system_pods.go:89] "kindnet-cqbxl" [3d49a6b1-5be5-4d96-98e3-bd05035a2d1b] Running
	I0920 18:01:38.498618  256536 system_pods.go:89] "kindnet-z24zp" [9271d251-2d95-4b23-85f3-7da6567b2fc3] Running
	I0920 18:01:38.498622  256536 system_pods.go:89] "kube-apiserver-ha-347193" [993ccf05-a39a-42b4-b82d-936531325dc4] Running
	I0920 18:01:38.498625  256536 system_pods.go:89] "kube-apiserver-ha-347193-m02" [43cd77b8-8925-4a04-a8cf-1b9a0cbbc502] Running
	I0920 18:01:38.498629  256536 system_pods.go:89] "kube-apiserver-ha-347193-m03" [02b7bcea-c245-4b1e-9be5-e815d4aceb74] Running
	I0920 18:01:38.498634  256536 system_pods.go:89] "kube-controller-manager-ha-347193" [6de3a14b-6587-45d4-aaee-1256b9c327cc] Running
	I0920 18:01:38.498637  256536 system_pods.go:89] "kube-controller-manager-ha-347193-m02" [cdf3f4d7-0675-4c59-8ad5-8901104d71c3] Running
	I0920 18:01:38.498641  256536 system_pods.go:89] "kube-controller-manager-ha-347193-m03" [3a4a0044-50e7-475a-9be9-76edda1c27ab] Running
	I0920 18:01:38.498644  256536 system_pods.go:89] "kube-proxy-ffdvq" [97120f62-0af2-405a-b8ff-639c72a39a2d] Running
	I0920 18:01:38.498647  256536 system_pods.go:89] "kube-proxy-pccxp" [3a4882b7-f59f-47d4-b2dc-d5b7f8f0d2c7] Running
	I0920 18:01:38.498653  256536 system_pods.go:89] "kube-proxy-rdqkg" [d9ae4e37-b29b-400a-af2d-544da4024069] Running
	I0920 18:01:38.498658  256536 system_pods.go:89] "kube-scheduler-ha-347193" [910baa0e-404e-4ac7-9262-848672eaf9cf] Running
	I0920 18:01:38.498662  256536 system_pods.go:89] "kube-scheduler-ha-347193-m02" [623b9c3b-b998-4516-a53e-17e9d8970594] Running
	I0920 18:01:38.498666  256536 system_pods.go:89] "kube-scheduler-ha-347193-m03" [cd08009b-7b3e-4c73-a2a0-824d43a19c0e] Running
	I0920 18:01:38.498669  256536 system_pods.go:89] "kube-vip-ha-347193" [20d6faa4-600f-4bd0-8acb-1f95c047da58] Running
	I0920 18:01:38.498673  256536 system_pods.go:89] "kube-vip-ha-347193-m02" [1455826c-7b3d-40f7-bb15-a9861ee95e19] Running
	I0920 18:01:38.498677  256536 system_pods.go:89] "kube-vip-ha-347193-m03" [d6b869ce-4510-400c-b8e9-6e3bec9718e4] Running
	I0920 18:01:38.498684  256536 system_pods.go:89] "storage-provisioner" [8924f7ce-85a0-4587-9c05-8a74c7113e9e] Running
	I0920 18:01:38.498690  256536 system_pods.go:126] duration metric: took 208.521056ms to wait for k8s-apps to be running ...
	I0920 18:01:38.498697  256536 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:01:38.498743  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:01:38.514029  256536 system_svc.go:56] duration metric: took 15.320471ms WaitForService to wait for kubelet
	I0920 18:01:38.514065  256536 kubeadm.go:582] duration metric: took 23.693509389s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:01:38.514086  256536 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:01:38.685544  256536 request.go:632] Waited for 171.353571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I0920 18:01:38.685619  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I0920 18:01:38.685624  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:38.685632  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:38.685636  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:38.690050  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:38.691008  256536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:01:38.691029  256536 node_conditions.go:123] node cpu capacity is 2
	I0920 18:01:38.691041  256536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:01:38.691045  256536 node_conditions.go:123] node cpu capacity is 2
	I0920 18:01:38.691049  256536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:01:38.691051  256536 node_conditions.go:123] node cpu capacity is 2
	I0920 18:01:38.691055  256536 node_conditions.go:105] duration metric: took 176.963396ms to run NodePressure ...
	I0920 18:01:38.691067  256536 start.go:241] waiting for startup goroutines ...
	I0920 18:01:38.691085  256536 start.go:255] writing updated cluster config ...
	I0920 18:01:38.691394  256536 ssh_runner.go:195] Run: rm -f paused
	I0920 18:01:38.746142  256536 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:01:38.748440  256536 out.go:177] * Done! kubectl is now configured to use "ha-347193" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.762532264Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855519762506866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fcc0cb0a-f25c-4ff4-8ee8-46bc30e03056 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.762981007Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d434963-3cb2-4b8c-ad61-1a14035f65d4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.763035016Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d434963-3cb2-4b8c-ad61-1a14035f65d4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.763316429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24d13f339c817f2c3d48c059df5dbe95c744b4d770d8f156847fa2a2faddfb16,PodSandboxId:d56c4fb5022a404a46d5f7d82850d40a0aa7777c181ec28335c613507545a0bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726855304216814938,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f54f7a5f2c32d6bc0ec6ee35174b79a19554688494c5a052f414808aba6d5d3,PodSandboxId:1008d082466619b9dff1a593919ad42edc22d2689cb4c63ade9d89a2aa3d82cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726855158873195435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5,PodSandboxId:cfb097797b5199e13abd148addc36710b46ff21a5a455fd00cb451d38da7e05d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158811750044,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01,PodSandboxId:503157b6402f3fdb0fbd3acb2400d53236322647d77c96f1052ed35d749180f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158740895692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54
cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9,PodSandboxId:d420593f085b4cce05c57a111a173b0556ed0c67debb38fe8785523e9aaf085f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172685514
7923720838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5,PodSandboxId:db50f6f39d94cb0d879c4bc3f389365a1aefdeeb9341d4b7a315ad9363b767e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726855146590131954,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3702c95ae17f31f21a9df60d65fe7c873d5a4a63c4bb0951d83c81da6fdcdcc9,PodSandboxId:79b3c32a6e6c014d62d3cf90229370a249daff625a451df26bc56b63f13b5011,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726855137793174604,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40531f7fb6a94d470f366df1ed8127e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce6ebcdcfa257db8db2294b5d4f9a4b32180c57f4a35e50afb651fea43c30c4,PodSandboxId:88ee68a7e316b7dd733350aa45479a511371c952904195167a88e9851da02e65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855135226742097,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f,PodSandboxId:6700b91af83d5fda728b258af762ef66bdc5850c898a59cb3d1e0f4a4a4f5bf2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855135139365214,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d,PodSandboxId:3b399285f0a3eef630d7233d6655753e34da36bbb3233127467544caa535e7b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855135143406129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db95e41c4eeebdea4c6e2d542a978f368ff56b30edfe1ff54593feed82c7c09,PodSandboxId:a832aed299e3faf778cd7e1ebb68848a5f31d0f1bbd92c129bcc7511f62ef4df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855135082024585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d434963-3cb2-4b8c-ad61-1a14035f65d4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.809736448Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=83efb26c-c727-4ee8-ad67-c851bde304db name=/runtime.v1.RuntimeService/Version
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.809830820Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=83efb26c-c727-4ee8-ad67-c851bde304db name=/runtime.v1.RuntimeService/Version
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.811527655Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5c60b876-606d-4878-9c1e-d611dbbd375b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.812256944Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855519812225204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5c60b876-606d-4878-9c1e-d611dbbd375b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.813615251Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d523ace-18f0-41f1-a4da-29e7b470a3b1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.813677719Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d523ace-18f0-41f1-a4da-29e7b470a3b1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.813921317Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24d13f339c817f2c3d48c059df5dbe95c744b4d770d8f156847fa2a2faddfb16,PodSandboxId:d56c4fb5022a404a46d5f7d82850d40a0aa7777c181ec28335c613507545a0bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726855304216814938,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f54f7a5f2c32d6bc0ec6ee35174b79a19554688494c5a052f414808aba6d5d3,PodSandboxId:1008d082466619b9dff1a593919ad42edc22d2689cb4c63ade9d89a2aa3d82cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726855158873195435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5,PodSandboxId:cfb097797b5199e13abd148addc36710b46ff21a5a455fd00cb451d38da7e05d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158811750044,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01,PodSandboxId:503157b6402f3fdb0fbd3acb2400d53236322647d77c96f1052ed35d749180f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158740895692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54
cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9,PodSandboxId:d420593f085b4cce05c57a111a173b0556ed0c67debb38fe8785523e9aaf085f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172685514
7923720838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5,PodSandboxId:db50f6f39d94cb0d879c4bc3f389365a1aefdeeb9341d4b7a315ad9363b767e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726855146590131954,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3702c95ae17f31f21a9df60d65fe7c873d5a4a63c4bb0951d83c81da6fdcdcc9,PodSandboxId:79b3c32a6e6c014d62d3cf90229370a249daff625a451df26bc56b63f13b5011,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726855137793174604,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40531f7fb6a94d470f366df1ed8127e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce6ebcdcfa257db8db2294b5d4f9a4b32180c57f4a35e50afb651fea43c30c4,PodSandboxId:88ee68a7e316b7dd733350aa45479a511371c952904195167a88e9851da02e65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855135226742097,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f,PodSandboxId:6700b91af83d5fda728b258af762ef66bdc5850c898a59cb3d1e0f4a4a4f5bf2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855135139365214,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d,PodSandboxId:3b399285f0a3eef630d7233d6655753e34da36bbb3233127467544caa535e7b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855135143406129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db95e41c4eeebdea4c6e2d542a978f368ff56b30edfe1ff54593feed82c7c09,PodSandboxId:a832aed299e3faf778cd7e1ebb68848a5f31d0f1bbd92c129bcc7511f62ef4df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855135082024585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d523ace-18f0-41f1-a4da-29e7b470a3b1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.866795057Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29fe742c-865c-46ff-bc7f-2462d2f1db61 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.866905308Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29fe742c-865c-46ff-bc7f-2462d2f1db61 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.868362410Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2dd51f39-4ab5-421b-8bc6-23e0c384f4c0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.868862214Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855519868834774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2dd51f39-4ab5-421b-8bc6-23e0c384f4c0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.869428297Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0660914c-7e00-4162-957c-a679c448b33a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.869502335Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0660914c-7e00-4162-957c-a679c448b33a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.869753312Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24d13f339c817f2c3d48c059df5dbe95c744b4d770d8f156847fa2a2faddfb16,PodSandboxId:d56c4fb5022a404a46d5f7d82850d40a0aa7777c181ec28335c613507545a0bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726855304216814938,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f54f7a5f2c32d6bc0ec6ee35174b79a19554688494c5a052f414808aba6d5d3,PodSandboxId:1008d082466619b9dff1a593919ad42edc22d2689cb4c63ade9d89a2aa3d82cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726855158873195435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5,PodSandboxId:cfb097797b5199e13abd148addc36710b46ff21a5a455fd00cb451d38da7e05d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158811750044,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01,PodSandboxId:503157b6402f3fdb0fbd3acb2400d53236322647d77c96f1052ed35d749180f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158740895692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54
cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9,PodSandboxId:d420593f085b4cce05c57a111a173b0556ed0c67debb38fe8785523e9aaf085f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172685514
7923720838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5,PodSandboxId:db50f6f39d94cb0d879c4bc3f389365a1aefdeeb9341d4b7a315ad9363b767e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726855146590131954,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3702c95ae17f31f21a9df60d65fe7c873d5a4a63c4bb0951d83c81da6fdcdcc9,PodSandboxId:79b3c32a6e6c014d62d3cf90229370a249daff625a451df26bc56b63f13b5011,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726855137793174604,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40531f7fb6a94d470f366df1ed8127e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce6ebcdcfa257db8db2294b5d4f9a4b32180c57f4a35e50afb651fea43c30c4,PodSandboxId:88ee68a7e316b7dd733350aa45479a511371c952904195167a88e9851da02e65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855135226742097,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f,PodSandboxId:6700b91af83d5fda728b258af762ef66bdc5850c898a59cb3d1e0f4a4a4f5bf2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855135139365214,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d,PodSandboxId:3b399285f0a3eef630d7233d6655753e34da36bbb3233127467544caa535e7b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855135143406129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db95e41c4eeebdea4c6e2d542a978f368ff56b30edfe1ff54593feed82c7c09,PodSandboxId:a832aed299e3faf778cd7e1ebb68848a5f31d0f1bbd92c129bcc7511f62ef4df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855135082024585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0660914c-7e00-4162-957c-a679c448b33a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.909412942Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2fba9a9f-c8df-42be-842a-846bd1e82b7e name=/runtime.v1.RuntimeService/Version
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.909502048Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2fba9a9f-c8df-42be-842a-846bd1e82b7e name=/runtime.v1.RuntimeService/Version
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.911071386Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c01cb087-7c49-45f9-9a92-c1a51ee974c5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.911624828Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855519911595660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c01cb087-7c49-45f9-9a92-c1a51ee974c5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.912591146Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=513d0ecc-8332-424b-a05e-afcdf398463e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.912661151Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=513d0ecc-8332-424b-a05e-afcdf398463e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:19 ha-347193 crio[669]: time="2024-09-20 18:05:19.912896902Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24d13f339c817f2c3d48c059df5dbe95c744b4d770d8f156847fa2a2faddfb16,PodSandboxId:d56c4fb5022a404a46d5f7d82850d40a0aa7777c181ec28335c613507545a0bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726855304216814938,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f54f7a5f2c32d6bc0ec6ee35174b79a19554688494c5a052f414808aba6d5d3,PodSandboxId:1008d082466619b9dff1a593919ad42edc22d2689cb4c63ade9d89a2aa3d82cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726855158873195435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5,PodSandboxId:cfb097797b5199e13abd148addc36710b46ff21a5a455fd00cb451d38da7e05d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158811750044,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01,PodSandboxId:503157b6402f3fdb0fbd3acb2400d53236322647d77c96f1052ed35d749180f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158740895692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54
cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9,PodSandboxId:d420593f085b4cce05c57a111a173b0556ed0c67debb38fe8785523e9aaf085f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172685514
7923720838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5,PodSandboxId:db50f6f39d94cb0d879c4bc3f389365a1aefdeeb9341d4b7a315ad9363b767e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726855146590131954,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3702c95ae17f31f21a9df60d65fe7c873d5a4a63c4bb0951d83c81da6fdcdcc9,PodSandboxId:79b3c32a6e6c014d62d3cf90229370a249daff625a451df26bc56b63f13b5011,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726855137793174604,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40531f7fb6a94d470f366df1ed8127e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce6ebcdcfa257db8db2294b5d4f9a4b32180c57f4a35e50afb651fea43c30c4,PodSandboxId:88ee68a7e316b7dd733350aa45479a511371c952904195167a88e9851da02e65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855135226742097,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f,PodSandboxId:6700b91af83d5fda728b258af762ef66bdc5850c898a59cb3d1e0f4a4a4f5bf2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855135139365214,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d,PodSandboxId:3b399285f0a3eef630d7233d6655753e34da36bbb3233127467544caa535e7b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855135143406129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db95e41c4eeebdea4c6e2d542a978f368ff56b30edfe1ff54593feed82c7c09,PodSandboxId:a832aed299e3faf778cd7e1ebb68848a5f31d0f1bbd92c129bcc7511f62ef4df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855135082024585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=513d0ecc-8332-424b-a05e-afcdf398463e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	24d13f339c817       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   d56c4fb5022a4       busybox-7dff88458-vv8nw
	6f54f7a5f2c32       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   1008d08246661       storage-provisioner
	998d6fb086954       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   cfb097797b519       coredns-7c65d6cfc9-6llmd
	4980eee34ad3b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   503157b6402f3       coredns-7c65d6cfc9-bkmhn
	54d750519756c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   d420593f085b4       kube-proxy-rdqkg
	ebfa9fcdc2495       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   db50f6f39d94c       kindnet-z24zp
	3702c95ae17f3       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   79b3c32a6e6c0       kube-vip-ha-347193
	dce6ebcdcfa25       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   88ee68a7e316b       kube-apiserver-ha-347193
	b9e6f76c6e332       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   3b399285f0a3e       etcd-ha-347193
	6cae0975e4bde       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   6700b91af83d5       kube-scheduler-ha-347193
	5db95e41c4eee       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   a832aed299e3f       kube-controller-manager-ha-347193
	
	
	==> coredns [4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01] <==
	[INFO] 10.244.1.2:54565 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.005838401s
	[INFO] 10.244.2.2:51366 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000199485s
	[INFO] 10.244.0.4:36108 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000120747s
	[INFO] 10.244.0.4:52405 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000754716s
	[INFO] 10.244.0.4:39912 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001939354s
	[INFO] 10.244.1.2:35811 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004936568s
	[INFO] 10.244.1.2:36016 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003046132s
	[INFO] 10.244.1.2:34653 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170016s
	[INFO] 10.244.1.2:59470 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145491s
	[INFO] 10.244.2.2:50581 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001424335s
	[INFO] 10.244.2.2:53657 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087743s
	[INFO] 10.244.0.4:45468 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002017081s
	[INFO] 10.244.0.4:50151 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148946s
	[INFO] 10.244.0.4:51594 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101915s
	[INFO] 10.244.0.4:54414 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114937s
	[INFO] 10.244.1.2:38701 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218522s
	[INFO] 10.244.1.2:41853 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000182128s
	[INFO] 10.244.2.2:48909 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000169464s
	[INFO] 10.244.0.4:55409 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111385s
	[INFO] 10.244.1.2:58822 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000137575s
	[INFO] 10.244.2.2:55178 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124535s
	[INFO] 10.244.2.2:44350 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150664s
	[INFO] 10.244.0.4:57962 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114195s
	[INFO] 10.244.0.4:56551 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094805s
	[INFO] 10.244.0.4:45171 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000054433s
	
	
	==> coredns [998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5] <==
	[INFO] 10.244.1.2:55559 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000283442s
	[INFO] 10.244.2.2:33784 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188859s
	[INFO] 10.244.2.2:58215 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00186989s
	[INFO] 10.244.2.2:52774 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099748s
	[INFO] 10.244.2.2:38149 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001158s
	[INFO] 10.244.2.2:42221 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000113646s
	[INFO] 10.244.2.2:49599 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173465s
	[INFO] 10.244.0.4:60750 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180138s
	[INFO] 10.244.0.4:46666 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171665s
	[INFO] 10.244.0.4:52002 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001444571s
	[INFO] 10.244.0.4:45151 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00006024s
	[INFO] 10.244.1.2:34989 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195829s
	[INFO] 10.244.1.2:34116 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087145s
	[INFO] 10.244.2.2:41553 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124108s
	[INFO] 10.244.2.2:35637 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116822s
	[INFO] 10.244.2.2:34355 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111835s
	[INFO] 10.244.0.4:48848 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165085s
	[INFO] 10.244.0.4:49930 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082351s
	[INFO] 10.244.0.4:35945 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077731s
	[INFO] 10.244.1.2:37666 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145796s
	[INFO] 10.244.1.2:50941 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000259758s
	[INFO] 10.244.1.2:52591 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141872s
	[INFO] 10.244.2.2:39683 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141964s
	[INFO] 10.244.2.2:51672 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000176831s
	[INFO] 10.244.0.4:58285 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000193464s
	
	
	==> describe nodes <==
	Name:               ha-347193
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-347193
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=ha-347193
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T17_59_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:59:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-347193
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:05:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:02:04 +0000   Fri, 20 Sep 2024 17:59:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:02:04 +0000   Fri, 20 Sep 2024 17:59:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:02:04 +0000   Fri, 20 Sep 2024 17:59:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:02:04 +0000   Fri, 20 Sep 2024 17:59:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-347193
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 24c3d61093c44fc4b2898b98b4bdbc70
	  System UUID:                24c3d610-93c4-4fc4-b289-8b98b4bdbc70
	  Boot ID:                    5638bfe2-e986-4137-9385-e18b7e4b519b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vv8nw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 coredns-7c65d6cfc9-6llmd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m14s
	  kube-system                 coredns-7c65d6cfc9-bkmhn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m14s
	  kube-system                 etcd-ha-347193                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m19s
	  kube-system                 kindnet-z24zp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m15s
	  kube-system                 kube-apiserver-ha-347193             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-controller-manager-ha-347193    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-proxy-rdqkg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-scheduler-ha-347193             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-vip-ha-347193                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m12s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m26s (x7 over 6m26s)  kubelet          Node ha-347193 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m26s (x8 over 6m26s)  kubelet          Node ha-347193 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s (x8 over 6m26s)  kubelet          Node ha-347193 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m19s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m19s                  kubelet          Node ha-347193 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m19s                  kubelet          Node ha-347193 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m19s                  kubelet          Node ha-347193 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m15s                  node-controller  Node ha-347193 event: Registered Node ha-347193 in Controller
	  Normal  NodeReady                6m2s                   kubelet          Node ha-347193 status is now: NodeReady
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-347193 event: Registered Node ha-347193 in Controller
	  Normal  RegisteredNode           4m1s                   node-controller  Node ha-347193 event: Registered Node ha-347193 in Controller
	
	
	Name:               ha-347193-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-347193-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=ha-347193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_59_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:59:56 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-347193-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:02:50 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 18:01:58 +0000   Fri, 20 Sep 2024 18:03:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 18:01:58 +0000   Fri, 20 Sep 2024 18:03:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 18:01:58 +0000   Fri, 20 Sep 2024 18:03:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 18:01:58 +0000   Fri, 20 Sep 2024 18:03:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    ha-347193-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 325a97217aeb4c8f9cb24edad597fd25
	  System UUID:                325a9721-7aeb-4c8f-9cb2-4edad597fd25
	  Boot ID:                    bc33abb6-f61b-42e2-af43-631d2ede4061
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-85fk6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-347193-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m22s
	  kube-system                 kindnet-cqbxl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m24s
	  kube-system                 kube-apiserver-ha-347193-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-controller-manager-ha-347193-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-proxy-ffdvq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-scheduler-ha-347193-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-vip-ha-347193-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m24s (x8 over 5m24s)  kubelet          Node ha-347193-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m24s (x8 over 5m24s)  kubelet          Node ha-347193-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m24s (x7 over 5m24s)  kubelet          Node ha-347193-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m20s                  node-controller  Node ha-347193-m02 event: Registered Node ha-347193-m02 in Controller
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-347193-m02 event: Registered Node ha-347193-m02 in Controller
	  Normal  RegisteredNode           4m1s                   node-controller  Node ha-347193-m02 event: Registered Node ha-347193-m02 in Controller
	  Normal  NodeNotReady             110s                   node-controller  Node ha-347193-m02 status is now: NodeNotReady
	
	
	Name:               ha-347193-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-347193-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=ha-347193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_01_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:01:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-347193-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:05:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:02:13 +0000   Fri, 20 Sep 2024 18:01:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:02:13 +0000   Fri, 20 Sep 2024 18:01:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:02:13 +0000   Fri, 20 Sep 2024 18:01:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:02:13 +0000   Fri, 20 Sep 2024 18:01:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-347193-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 987815694814485e84522bfda359ab42
	  System UUID:                98781569-4814-485e-8452-2bfda359ab42
	  Boot ID:                    fc58e56d-3ed2-412a-b9e5-cb7d5fb81d74
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-p824h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-347193-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m7s
	  kube-system                 kindnet-5msnk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m9s
	  kube-system                 kube-apiserver-ha-347193-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-controller-manager-ha-347193-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-proxy-pccxp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-ha-347193-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-vip-ha-347193-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m5s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node ha-347193-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node ha-347193-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m9s)  kubelet          Node ha-347193-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-347193-m03 event: Registered Node ha-347193-m03 in Controller
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-347193-m03 event: Registered Node ha-347193-m03 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-347193-m03 event: Registered Node ha-347193-m03 in Controller
	
	
	Name:               ha-347193-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-347193-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=ha-347193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_02_19_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:02:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-347193-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:05:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:02:49 +0000   Fri, 20 Sep 2024 18:02:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:02:49 +0000   Fri, 20 Sep 2024 18:02:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:02:49 +0000   Fri, 20 Sep 2024 18:02:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:02:49 +0000   Fri, 20 Sep 2024 18:02:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    ha-347193-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 36beb0176a7e4c449ee02f4adaf970e8
	  System UUID:                36beb017-6a7e-4c44-9ee0-2f4adaf970e8
	  Boot ID:                    347456dd-4ba6-4d92-bdee-958017f6c085
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-t5f94       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-gtwzd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  CIDRAssignmentFailed     3m2s                 cidrAllocator    Node ha-347193-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m2s)  kubelet          Node ha-347193-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m2s)  kubelet          Node ha-347193-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m2s)  kubelet          Node ha-347193-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-347193-m04 event: Registered Node ha-347193-m04 in Controller
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-347193-m04 event: Registered Node ha-347193-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-347193-m04 event: Registered Node ha-347193-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-347193-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep20 17:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051116] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037930] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.768779] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.874615] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.547112] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.314105] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.055929] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059483] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.173430] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.132192] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.252987] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +3.876503] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +5.009721] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.059883] kauditd_printk_skb: 158 callbacks suppressed
	[Sep20 17:59] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +0.095619] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.048443] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.211237] kauditd_printk_skb: 38 callbacks suppressed
	[Sep20 18:00] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d] <==
	{"level":"warn","ts":"2024-09-20T18:05:20.070987Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.156883Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.196877Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.208491Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.214427Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.227115Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.236950Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.246060Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.250637Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.259259Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.259724Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.267101Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.275850Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.283994Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.297551Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.301765Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.308254Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.316172Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.328316Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.334224Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.339691Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.345095Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.354332Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.356925Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:20.364686Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:05:20 up 6 min,  0 users,  load average: 0.17, 0.26, 0.14
	Linux ha-347193 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5] <==
	I0920 18:04:47.652746       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	I0920 18:04:57.660425       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:04:57.660717       1 main.go:299] handling current node
	I0920 18:04:57.660765       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:04:57.660804       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:04:57.661001       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0920 18:04:57.661174       1 main.go:322] Node ha-347193-m03 has CIDR [10.244.2.0/24] 
	I0920 18:04:57.662429       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:04:57.662486       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	I0920 18:05:07.652020       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:05:07.652151       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	I0920 18:05:07.652431       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:05:07.652468       1 main.go:299] handling current node
	I0920 18:05:07.652492       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:05:07.652521       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:05:07.652622       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0920 18:05:07.652641       1 main.go:322] Node ha-347193-m03 has CIDR [10.244.2.0/24] 
	I0920 18:05:17.653044       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:05:17.653093       1 main.go:299] handling current node
	I0920 18:05:17.653117       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:05:17.653124       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:05:17.653356       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0920 18:05:17.653380       1 main.go:322] Node ha-347193-m03 has CIDR [10.244.2.0/24] 
	I0920 18:05:17.653452       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:05:17.653459       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [dce6ebcdcfa257db8db2294b5d4f9a4b32180c57f4a35e50afb651fea43c30c4] <==
	I0920 17:59:00.040430       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0920 17:59:00.047869       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.246]
	I0920 17:59:00.048849       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 17:59:00.053986       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 17:59:00.261870       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 17:59:01.502885       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 17:59:01.521849       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0920 17:59:01.592823       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 17:59:05.362201       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0920 17:59:05.964721       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0920 18:01:45.789676       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35350: use of closed network connection
	E0920 18:01:45.987221       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35370: use of closed network connection
	E0920 18:01:46.203136       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35382: use of closed network connection
	E0920 18:01:46.410018       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35390: use of closed network connection
	E0920 18:01:46.596914       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35412: use of closed network connection
	E0920 18:01:46.785733       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35422: use of closed network connection
	E0920 18:01:46.963707       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35432: use of closed network connection
	E0920 18:01:47.352644       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35476: use of closed network connection
	E0920 18:01:47.677101       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35494: use of closed network connection
	E0920 18:01:47.852966       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35506: use of closed network connection
	E0920 18:01:48.037422       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35530: use of closed network connection
	E0920 18:01:48.215519       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35552: use of closed network connection
	E0920 18:01:48.395158       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35562: use of closed network connection
	E0920 18:01:48.571105       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35590: use of closed network connection
	W0920 18:03:10.061403       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.246 192.168.39.250]
	
	
	==> kube-controller-manager [5db95e41c4eeebdea4c6e2d542a978f368ff56b30edfe1ff54593feed82c7c09] <==
	I0920 18:02:18.858784       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:18.864042       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:18.983762       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	E0920 18:02:18.993581       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"f4761ca4-6943-48ac-a03b-0da33530a65b\", ResourceVersion:\"914\", Generation:1, CreationTimestamp:time.Date(2024, time.September, 20, 17, 59, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0025004a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\
", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource
)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00281e400), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0024c5650), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolum
eSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVo
lumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0024c5668), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtua
lDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.31.1\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc0025004e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Res
ourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\
"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0026a90e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc002a3e7a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002939a00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.Host
Alias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc002991a60)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002a3e800)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfille
d on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0920 18:02:19.395187       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:19.617626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:19.719171       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:19.749178       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:20.458068       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-347193-m04"
	I0920 18:02:20.458612       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:20.582907       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:28.989089       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:38.284793       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-347193-m04"
	I0920 18:02:38.284953       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:38.304718       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:39.566872       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:49.271755       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:03:30.485154       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-347193-m04"
	I0920 18:03:30.485472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m02"
	I0920 18:03:30.507411       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m02"
	I0920 18:03:30.648029       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="83.389173ms"
	I0920 18:03:30.648163       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.297µs"
	I0920 18:03:34.617103       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m02"
	I0920 18:03:35.794110       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m02"
	
	
	==> kube-proxy [54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 17:59:08.146402       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 17:59:08.169465       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.246"]
	E0920 17:59:08.169636       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:59:08.200549       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 17:59:08.200672       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 17:59:08.200715       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:59:08.203687       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:59:08.204074       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:59:08.204250       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:59:08.207892       1 config.go:199] "Starting service config controller"
	I0920 17:59:08.208388       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:59:08.208680       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:59:08.211000       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:59:08.208820       1 config.go:328] "Starting node config controller"
	I0920 17:59:08.211110       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:59:08.308818       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:59:08.311223       1 shared_informer.go:320] Caches are synced for node config
	I0920 17:59:08.311448       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f] <==
	W0920 17:58:59.136078       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 17:58:59.136125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.152907       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 17:58:59.152970       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.232222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 17:58:59.232522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.417181       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 17:58:59.417310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.425477       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 17:58:59.426116       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.425550       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 17:58:59.426253       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.487540       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 17:58:59.487590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.537813       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:58:59.537936       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.543453       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 17:58:59.543567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.650341       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 17:58:59.650386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 17:59:01.377349       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0920 18:02:18.846875       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-t5f94\": pod kindnet-t5f94 is already assigned to node \"ha-347193-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-t5f94" node="ha-347193-m04"
	E0920 18:02:18.847041       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 33dab94e-9da4-4a58-83f6-a7a351c8c216(kube-system/kindnet-t5f94) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-t5f94"
	E0920 18:02:18.847081       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-t5f94\": pod kindnet-t5f94 is already assigned to node \"ha-347193-m04\"" pod="kube-system/kindnet-t5f94"
	I0920 18:02:18.847108       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-t5f94" node="ha-347193-m04"
	
	
	==> kubelet <==
	Sep 20 18:04:01 ha-347193 kubelet[1310]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:04:01 ha-347193 kubelet[1310]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:04:01 ha-347193 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:04:01 ha-347193 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:04:01 ha-347193 kubelet[1310]: E0920 18:04:01.734604    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855441733710654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:01 ha-347193 kubelet[1310]: E0920 18:04:01.734645    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855441733710654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:11 ha-347193 kubelet[1310]: E0920 18:04:11.739024    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855451738567159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:11 ha-347193 kubelet[1310]: E0920 18:04:11.739486    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855451738567159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:21 ha-347193 kubelet[1310]: E0920 18:04:21.742025    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855461741705291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:21 ha-347193 kubelet[1310]: E0920 18:04:21.742425    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855461741705291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:31 ha-347193 kubelet[1310]: E0920 18:04:31.746728    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855471746051287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:31 ha-347193 kubelet[1310]: E0920 18:04:31.746767    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855471746051287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:41 ha-347193 kubelet[1310]: E0920 18:04:41.748167    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855481747854827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:41 ha-347193 kubelet[1310]: E0920 18:04:41.748208    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855481747854827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:51 ha-347193 kubelet[1310]: E0920 18:04:51.749843    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855491749438137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:51 ha-347193 kubelet[1310]: E0920 18:04:51.750125    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855491749438137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:05:01 ha-347193 kubelet[1310]: E0920 18:05:01.623246    1310 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 18:05:01 ha-347193 kubelet[1310]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:05:01 ha-347193 kubelet[1310]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:05:01 ha-347193 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:05:01 ha-347193 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:05:01 ha-347193 kubelet[1310]: E0920 18:05:01.752334    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855501751934283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:05:01 ha-347193 kubelet[1310]: E0920 18:05:01.752368    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855501751934283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:05:11 ha-347193 kubelet[1310]: E0920 18:05:11.756769    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855511755858702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:05:11 ha-347193 kubelet[1310]: E0920 18:05:11.757336    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855511755858702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-347193 -n ha-347193
helpers_test.go:261: (dbg) Run:  kubectl --context ha-347193 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.87s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.444076411s)
ha_test.go:413: expected profile "ha-347193" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-347193\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-347193\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-347193\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.246\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.241\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.250\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.234\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,
\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize
\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-347193 -n ha-347193
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-347193 logs -n 25: (1.379699195s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-347193 cp ha-347193-m03:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3833348347/001/cp-test_ha-347193-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m03:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193:/home/docker/cp-test_ha-347193-m03_ha-347193.txt                       |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193 sudo cat                                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m03_ha-347193.txt                                 |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m03:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m02:/home/docker/cp-test_ha-347193-m03_ha-347193-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193-m02 sudo cat                                          | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m03_ha-347193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m03:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04:/home/docker/cp-test_ha-347193-m03_ha-347193-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193-m04 sudo cat                                          | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m03_ha-347193-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-347193 cp testdata/cp-test.txt                                                | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3833348347/001/cp-test_ha-347193-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193:/home/docker/cp-test_ha-347193-m04_ha-347193.txt                       |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193 sudo cat                                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m04_ha-347193.txt                                 |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m02:/home/docker/cp-test_ha-347193-m04_ha-347193-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193-m02 sudo cat                                          | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m04_ha-347193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m03:/home/docker/cp-test_ha-347193-m04_ha-347193-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193-m03 sudo cat                                          | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m04_ha-347193-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-347193 node stop m02 -v=7                                                     | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:58:19
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:58:19.719554  256536 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:58:19.719784  256536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:58:19.719792  256536 out.go:358] Setting ErrFile to fd 2...
	I0920 17:58:19.719796  256536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:58:19.719960  256536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 17:58:19.720540  256536 out.go:352] Setting JSON to false
	I0920 17:58:19.721444  256536 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6043,"bootTime":1726849057,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:58:19.721554  256536 start.go:139] virtualization: kvm guest
	I0920 17:58:19.723941  256536 out.go:177] * [ha-347193] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:58:19.725468  256536 notify.go:220] Checking for updates...
	I0920 17:58:19.725480  256536 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 17:58:19.727002  256536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:58:19.728644  256536 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:58:19.730001  256536 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:58:19.731378  256536 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:58:19.732922  256536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:58:19.734763  256536 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:58:19.774481  256536 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 17:58:19.776642  256536 start.go:297] selected driver: kvm2
	I0920 17:58:19.776667  256536 start.go:901] validating driver "kvm2" against <nil>
	I0920 17:58:19.776681  256536 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:58:19.777528  256536 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:58:19.777634  256536 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 17:58:19.794619  256536 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 17:58:19.795141  256536 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:58:19.795583  256536 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:58:19.795675  256536 cni.go:84] Creating CNI manager for ""
	I0920 17:58:19.795761  256536 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0920 17:58:19.795792  256536 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 17:58:19.795946  256536 start.go:340] cluster config:
	{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0920 17:58:19.796187  256536 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:58:19.798837  256536 out.go:177] * Starting "ha-347193" primary control-plane node in "ha-347193" cluster
	I0920 17:58:19.800296  256536 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:58:19.800352  256536 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 17:58:19.800362  256536 cache.go:56] Caching tarball of preloaded images
	I0920 17:58:19.800459  256536 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:58:19.800470  256536 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:58:19.800790  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 17:58:19.800819  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json: {Name:mkfd3b988e8aa616e3cc88608f2502239f4ba220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:19.800990  256536 start.go:360] acquireMachinesLock for ha-347193: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:58:19.801023  256536 start.go:364] duration metric: took 17.719µs to acquireMachinesLock for "ha-347193"
	I0920 17:58:19.801041  256536 start.go:93] Provisioning new machine with config: &{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:58:19.801110  256536 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 17:58:19.803289  256536 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 17:58:19.803488  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:58:19.803546  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:58:19.819050  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41053
	I0920 17:58:19.819630  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:58:19.820279  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:58:19.820296  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:58:19.820691  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:58:19.820938  256536 main.go:141] libmachine: (ha-347193) Calling .GetMachineName
	I0920 17:58:19.821115  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:19.821335  256536 start.go:159] libmachine.API.Create for "ha-347193" (driver="kvm2")
	I0920 17:58:19.821366  256536 client.go:168] LocalClient.Create starting
	I0920 17:58:19.821397  256536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem
	I0920 17:58:19.821431  256536 main.go:141] libmachine: Decoding PEM data...
	I0920 17:58:19.821444  256536 main.go:141] libmachine: Parsing certificate...
	I0920 17:58:19.821515  256536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem
	I0920 17:58:19.821537  256536 main.go:141] libmachine: Decoding PEM data...
	I0920 17:58:19.821546  256536 main.go:141] libmachine: Parsing certificate...
	I0920 17:58:19.821560  256536 main.go:141] libmachine: Running pre-create checks...
	I0920 17:58:19.821570  256536 main.go:141] libmachine: (ha-347193) Calling .PreCreateCheck
	I0920 17:58:19.821998  256536 main.go:141] libmachine: (ha-347193) Calling .GetConfigRaw
	I0920 17:58:19.822485  256536 main.go:141] libmachine: Creating machine...
	I0920 17:58:19.822507  256536 main.go:141] libmachine: (ha-347193) Calling .Create
	I0920 17:58:19.822712  256536 main.go:141] libmachine: (ha-347193) Creating KVM machine...
	I0920 17:58:19.824224  256536 main.go:141] libmachine: (ha-347193) DBG | found existing default KVM network
	I0920 17:58:19.824984  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:19.824842  256559 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000221330}
	I0920 17:58:19.825024  256536 main.go:141] libmachine: (ha-347193) DBG | created network xml: 
	I0920 17:58:19.825037  256536 main.go:141] libmachine: (ha-347193) DBG | <network>
	I0920 17:58:19.825044  256536 main.go:141] libmachine: (ha-347193) DBG |   <name>mk-ha-347193</name>
	I0920 17:58:19.825049  256536 main.go:141] libmachine: (ha-347193) DBG |   <dns enable='no'/>
	I0920 17:58:19.825054  256536 main.go:141] libmachine: (ha-347193) DBG |   
	I0920 17:58:19.825061  256536 main.go:141] libmachine: (ha-347193) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 17:58:19.825067  256536 main.go:141] libmachine: (ha-347193) DBG |     <dhcp>
	I0920 17:58:19.825072  256536 main.go:141] libmachine: (ha-347193) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 17:58:19.825079  256536 main.go:141] libmachine: (ha-347193) DBG |     </dhcp>
	I0920 17:58:19.825084  256536 main.go:141] libmachine: (ha-347193) DBG |   </ip>
	I0920 17:58:19.825090  256536 main.go:141] libmachine: (ha-347193) DBG |   
	I0920 17:58:19.825094  256536 main.go:141] libmachine: (ha-347193) DBG | </network>
	I0920 17:58:19.825099  256536 main.go:141] libmachine: (ha-347193) DBG | 
	I0920 17:58:19.830808  256536 main.go:141] libmachine: (ha-347193) DBG | trying to create private KVM network mk-ha-347193 192.168.39.0/24...
	I0920 17:58:19.907893  256536 main.go:141] libmachine: (ha-347193) DBG | private KVM network mk-ha-347193 192.168.39.0/24 created
	I0920 17:58:19.907950  256536 main.go:141] libmachine: (ha-347193) Setting up store path in /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193 ...
	I0920 17:58:19.907968  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:19.907787  256559 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:58:19.907992  256536 main.go:141] libmachine: (ha-347193) Building disk image from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 17:58:19.908014  256536 main.go:141] libmachine: (ha-347193) Downloading /home/jenkins/minikube-integration/19679-237658/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 17:58:20.183507  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:20.183335  256559 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa...
	I0920 17:58:20.394510  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:20.394309  256559 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/ha-347193.rawdisk...
	I0920 17:58:20.394561  256536 main.go:141] libmachine: (ha-347193) DBG | Writing magic tar header
	I0920 17:58:20.394576  256536 main.go:141] libmachine: (ha-347193) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193 (perms=drwx------)
	I0920 17:58:20.394593  256536 main.go:141] libmachine: (ha-347193) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines (perms=drwxr-xr-x)
	I0920 17:58:20.394599  256536 main.go:141] libmachine: (ha-347193) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube (perms=drwxr-xr-x)
	I0920 17:58:20.394610  256536 main.go:141] libmachine: (ha-347193) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658 (perms=drwxrwxr-x)
	I0920 17:58:20.394615  256536 main.go:141] libmachine: (ha-347193) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 17:58:20.394629  256536 main.go:141] libmachine: (ha-347193) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 17:58:20.394637  256536 main.go:141] libmachine: (ha-347193) DBG | Writing SSH key tar header
	I0920 17:58:20.394645  256536 main.go:141] libmachine: (ha-347193) Creating domain...
	I0920 17:58:20.394695  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:20.394434  256559 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193 ...
	I0920 17:58:20.394726  256536 main.go:141] libmachine: (ha-347193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193
	I0920 17:58:20.394740  256536 main.go:141] libmachine: (ha-347193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines
	I0920 17:58:20.394750  256536 main.go:141] libmachine: (ha-347193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:58:20.394760  256536 main.go:141] libmachine: (ha-347193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658
	I0920 17:58:20.394766  256536 main.go:141] libmachine: (ha-347193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 17:58:20.394776  256536 main.go:141] libmachine: (ha-347193) DBG | Checking permissions on dir: /home/jenkins
	I0920 17:58:20.394781  256536 main.go:141] libmachine: (ha-347193) DBG | Checking permissions on dir: /home
	I0920 17:58:20.394791  256536 main.go:141] libmachine: (ha-347193) DBG | Skipping /home - not owner
	I0920 17:58:20.396055  256536 main.go:141] libmachine: (ha-347193) define libvirt domain using xml: 
	I0920 17:58:20.396079  256536 main.go:141] libmachine: (ha-347193) <domain type='kvm'>
	I0920 17:58:20.396085  256536 main.go:141] libmachine: (ha-347193)   <name>ha-347193</name>
	I0920 17:58:20.396090  256536 main.go:141] libmachine: (ha-347193)   <memory unit='MiB'>2200</memory>
	I0920 17:58:20.396095  256536 main.go:141] libmachine: (ha-347193)   <vcpu>2</vcpu>
	I0920 17:58:20.396099  256536 main.go:141] libmachine: (ha-347193)   <features>
	I0920 17:58:20.396104  256536 main.go:141] libmachine: (ha-347193)     <acpi/>
	I0920 17:58:20.396108  256536 main.go:141] libmachine: (ha-347193)     <apic/>
	I0920 17:58:20.396113  256536 main.go:141] libmachine: (ha-347193)     <pae/>
	I0920 17:58:20.396121  256536 main.go:141] libmachine: (ha-347193)     
	I0920 17:58:20.396125  256536 main.go:141] libmachine: (ha-347193)   </features>
	I0920 17:58:20.396130  256536 main.go:141] libmachine: (ha-347193)   <cpu mode='host-passthrough'>
	I0920 17:58:20.396135  256536 main.go:141] libmachine: (ha-347193)   
	I0920 17:58:20.396139  256536 main.go:141] libmachine: (ha-347193)   </cpu>
	I0920 17:58:20.396144  256536 main.go:141] libmachine: (ha-347193)   <os>
	I0920 17:58:20.396150  256536 main.go:141] libmachine: (ha-347193)     <type>hvm</type>
	I0920 17:58:20.396155  256536 main.go:141] libmachine: (ha-347193)     <boot dev='cdrom'/>
	I0920 17:58:20.396161  256536 main.go:141] libmachine: (ha-347193)     <boot dev='hd'/>
	I0920 17:58:20.396220  256536 main.go:141] libmachine: (ha-347193)     <bootmenu enable='no'/>
	I0920 17:58:20.396253  256536 main.go:141] libmachine: (ha-347193)   </os>
	I0920 17:58:20.396265  256536 main.go:141] libmachine: (ha-347193)   <devices>
	I0920 17:58:20.396277  256536 main.go:141] libmachine: (ha-347193)     <disk type='file' device='cdrom'>
	I0920 17:58:20.396294  256536 main.go:141] libmachine: (ha-347193)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/boot2docker.iso'/>
	I0920 17:58:20.396309  256536 main.go:141] libmachine: (ha-347193)       <target dev='hdc' bus='scsi'/>
	I0920 17:58:20.396321  256536 main.go:141] libmachine: (ha-347193)       <readonly/>
	I0920 17:58:20.396335  256536 main.go:141] libmachine: (ha-347193)     </disk>
	I0920 17:58:20.396350  256536 main.go:141] libmachine: (ha-347193)     <disk type='file' device='disk'>
	I0920 17:58:20.396362  256536 main.go:141] libmachine: (ha-347193)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 17:58:20.396376  256536 main.go:141] libmachine: (ha-347193)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/ha-347193.rawdisk'/>
	I0920 17:58:20.396387  256536 main.go:141] libmachine: (ha-347193)       <target dev='hda' bus='virtio'/>
	I0920 17:58:20.396398  256536 main.go:141] libmachine: (ha-347193)     </disk>
	I0920 17:58:20.396413  256536 main.go:141] libmachine: (ha-347193)     <interface type='network'>
	I0920 17:58:20.396427  256536 main.go:141] libmachine: (ha-347193)       <source network='mk-ha-347193'/>
	I0920 17:58:20.396437  256536 main.go:141] libmachine: (ha-347193)       <model type='virtio'/>
	I0920 17:58:20.396449  256536 main.go:141] libmachine: (ha-347193)     </interface>
	I0920 17:58:20.396460  256536 main.go:141] libmachine: (ha-347193)     <interface type='network'>
	I0920 17:58:20.396470  256536 main.go:141] libmachine: (ha-347193)       <source network='default'/>
	I0920 17:58:20.396484  256536 main.go:141] libmachine: (ha-347193)       <model type='virtio'/>
	I0920 17:58:20.396495  256536 main.go:141] libmachine: (ha-347193)     </interface>
	I0920 17:58:20.396502  256536 main.go:141] libmachine: (ha-347193)     <serial type='pty'>
	I0920 17:58:20.396514  256536 main.go:141] libmachine: (ha-347193)       <target port='0'/>
	I0920 17:58:20.396524  256536 main.go:141] libmachine: (ha-347193)     </serial>
	I0920 17:58:20.396535  256536 main.go:141] libmachine: (ha-347193)     <console type='pty'>
	I0920 17:58:20.396546  256536 main.go:141] libmachine: (ha-347193)       <target type='serial' port='0'/>
	I0920 17:58:20.396570  256536 main.go:141] libmachine: (ha-347193)     </console>
	I0920 17:58:20.396588  256536 main.go:141] libmachine: (ha-347193)     <rng model='virtio'>
	I0920 17:58:20.396595  256536 main.go:141] libmachine: (ha-347193)       <backend model='random'>/dev/random</backend>
	I0920 17:58:20.396604  256536 main.go:141] libmachine: (ha-347193)     </rng>
	I0920 17:58:20.396635  256536 main.go:141] libmachine: (ha-347193)     
	I0920 17:58:20.396657  256536 main.go:141] libmachine: (ha-347193)     
	I0920 17:58:20.396672  256536 main.go:141] libmachine: (ha-347193)   </devices>
	I0920 17:58:20.396680  256536 main.go:141] libmachine: (ha-347193) </domain>
	I0920 17:58:20.396699  256536 main.go:141] libmachine: (ha-347193) 
	I0920 17:58:20.401190  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:83:b4:8d in network default
	I0920 17:58:20.401745  256536 main.go:141] libmachine: (ha-347193) Ensuring networks are active...
	I0920 17:58:20.401764  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:20.402424  256536 main.go:141] libmachine: (ha-347193) Ensuring network default is active
	I0920 17:58:20.402677  256536 main.go:141] libmachine: (ha-347193) Ensuring network mk-ha-347193 is active
	I0920 17:58:20.403127  256536 main.go:141] libmachine: (ha-347193) Getting domain xml...
	I0920 17:58:20.403705  256536 main.go:141] libmachine: (ha-347193) Creating domain...
	I0920 17:58:21.630872  256536 main.go:141] libmachine: (ha-347193) Waiting to get IP...
	I0920 17:58:21.631658  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:21.632047  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:21.632073  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:21.632024  256559 retry.go:31] will retry after 215.475523ms: waiting for machine to come up
	I0920 17:58:21.849753  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:21.850279  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:21.850310  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:21.850240  256559 retry.go:31] will retry after 263.201454ms: waiting for machine to come up
	I0920 17:58:22.114802  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:22.115310  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:22.115338  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:22.115259  256559 retry.go:31] will retry after 445.148422ms: waiting for machine to come up
	I0920 17:58:22.562073  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:22.562548  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:22.562573  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:22.562510  256559 retry.go:31] will retry after 558.224345ms: waiting for machine to come up
	I0920 17:58:23.122632  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:23.123096  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:23.123123  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:23.123050  256559 retry.go:31] will retry after 528.914105ms: waiting for machine to come up
	I0920 17:58:23.654056  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:23.654437  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:23.654467  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:23.654380  256559 retry.go:31] will retry after 657.509004ms: waiting for machine to come up
	I0920 17:58:24.313318  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:24.313802  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:24.313857  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:24.313765  256559 retry.go:31] will retry after 757.318604ms: waiting for machine to come up
	I0920 17:58:25.072515  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:25.072965  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:25.072995  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:25.072907  256559 retry.go:31] will retry after 1.361384929s: waiting for machine to come up
	I0920 17:58:26.435555  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:26.436017  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:26.436061  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:26.435982  256559 retry.go:31] will retry after 1.541186599s: waiting for machine to come up
	I0920 17:58:27.979940  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:27.980429  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:27.980460  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:27.980357  256559 retry.go:31] will retry after 1.786301166s: waiting for machine to come up
	I0920 17:58:29.767912  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:29.768468  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:29.768491  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:29.768439  256559 retry.go:31] will retry after 1.809883951s: waiting for machine to come up
	I0920 17:58:31.581113  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:31.581588  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:31.581619  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:31.581535  256559 retry.go:31] will retry after 3.405747274s: waiting for machine to come up
	I0920 17:58:34.988932  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:34.989387  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:34.989410  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:34.989369  256559 retry.go:31] will retry after 3.845362816s: waiting for machine to come up
	I0920 17:58:38.839191  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:38.839734  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:38.839759  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:38.839690  256559 retry.go:31] will retry after 3.611631644s: waiting for machine to come up
	I0920 17:58:42.454482  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.454977  256536 main.go:141] libmachine: (ha-347193) Found IP for machine: 192.168.39.246
	I0920 17:58:42.455003  256536 main.go:141] libmachine: (ha-347193) Reserving static IP address...
	I0920 17:58:42.455016  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has current primary IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.455495  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find host DHCP lease matching {name: "ha-347193", mac: "52:54:00:2e:07:bb", ip: "192.168.39.246"} in network mk-ha-347193
	I0920 17:58:42.533022  256536 main.go:141] libmachine: (ha-347193) DBG | Getting to WaitForSSH function...
	I0920 17:58:42.533056  256536 main.go:141] libmachine: (ha-347193) Reserved static IP address: 192.168.39.246
	I0920 17:58:42.533070  256536 main.go:141] libmachine: (ha-347193) Waiting for SSH to be available...
	I0920 17:58:42.535894  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.536329  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:42.536361  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.536501  256536 main.go:141] libmachine: (ha-347193) DBG | Using SSH client type: external
	I0920 17:58:42.536525  256536 main.go:141] libmachine: (ha-347193) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa (-rw-------)
	I0920 17:58:42.536553  256536 main.go:141] libmachine: (ha-347193) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 17:58:42.536592  256536 main.go:141] libmachine: (ha-347193) DBG | About to run SSH command:
	I0920 17:58:42.536627  256536 main.go:141] libmachine: (ha-347193) DBG | exit 0
	I0920 17:58:42.662095  256536 main.go:141] libmachine: (ha-347193) DBG | SSH cmd err, output: <nil>: 
	I0920 17:58:42.662356  256536 main.go:141] libmachine: (ha-347193) KVM machine creation complete!
	I0920 17:58:42.662742  256536 main.go:141] libmachine: (ha-347193) Calling .GetConfigRaw
	I0920 17:58:42.663393  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:42.663609  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:42.663783  256536 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 17:58:42.663799  256536 main.go:141] libmachine: (ha-347193) Calling .GetState
	I0920 17:58:42.665335  256536 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 17:58:42.665349  256536 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 17:58:42.665355  256536 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 17:58:42.665361  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:42.667970  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.668505  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:42.668538  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.668703  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:42.668963  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:42.669124  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:42.669264  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:42.669457  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:42.669727  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 17:58:42.669743  256536 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 17:58:42.777219  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:58:42.777243  256536 main.go:141] libmachine: Detecting the provisioner...
	I0920 17:58:42.777251  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:42.779860  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.780225  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:42.780252  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.780402  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:42.780602  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:42.780743  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:42.780837  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:42.781037  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:42.781263  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 17:58:42.781279  256536 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 17:58:42.886633  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 17:58:42.886732  256536 main.go:141] libmachine: found compatible host: buildroot
	I0920 17:58:42.886747  256536 main.go:141] libmachine: Provisioning with buildroot...
	I0920 17:58:42.886757  256536 main.go:141] libmachine: (ha-347193) Calling .GetMachineName
	I0920 17:58:42.887046  256536 buildroot.go:166] provisioning hostname "ha-347193"
	I0920 17:58:42.887073  256536 main.go:141] libmachine: (ha-347193) Calling .GetMachineName
	I0920 17:58:42.887313  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:42.889831  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.890182  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:42.890207  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.890355  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:42.890545  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:42.890718  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:42.890846  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:42.891093  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:42.891253  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 17:58:42.891265  256536 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-347193 && echo "ha-347193" | sudo tee /etc/hostname
	I0920 17:58:43.011225  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-347193
	
	I0920 17:58:43.011253  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.014324  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.014803  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.014831  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.015003  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:43.015234  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.015466  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.015676  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:43.015888  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:43.016055  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 17:58:43.016070  256536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-347193' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-347193/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-347193' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:58:43.130242  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:58:43.130286  256536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 17:58:43.130357  256536 buildroot.go:174] setting up certificates
	I0920 17:58:43.130379  256536 provision.go:84] configureAuth start
	I0920 17:58:43.130401  256536 main.go:141] libmachine: (ha-347193) Calling .GetMachineName
	I0920 17:58:43.130726  256536 main.go:141] libmachine: (ha-347193) Calling .GetIP
	I0920 17:58:43.133505  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.133825  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.133848  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.134052  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.136401  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.136730  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.136750  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.136952  256536 provision.go:143] copyHostCerts
	I0920 17:58:43.136981  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 17:58:43.137013  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 17:58:43.137030  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 17:58:43.137096  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 17:58:43.137174  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 17:58:43.137193  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 17:58:43.137199  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 17:58:43.137223  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 17:58:43.137264  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 17:58:43.137284  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 17:58:43.137292  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 17:58:43.137312  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 17:58:43.137361  256536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.ha-347193 san=[127.0.0.1 192.168.39.246 ha-347193 localhost minikube]
	I0920 17:58:43.262974  256536 provision.go:177] copyRemoteCerts
	I0920 17:58:43.263055  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:58:43.263085  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.265602  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.265934  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.265962  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.266136  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:43.266349  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.266507  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:43.266640  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:58:43.348226  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 17:58:43.348355  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 17:58:43.371291  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 17:58:43.371380  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0920 17:58:43.393409  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 17:58:43.393490  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 17:58:43.417165  256536 provision.go:87] duration metric: took 286.759784ms to configureAuth
	I0920 17:58:43.417200  256536 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:58:43.417422  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:58:43.417508  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.420548  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.420826  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.420856  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.421056  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:43.421256  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.421438  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.421576  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:43.421745  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:43.422081  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 17:58:43.422105  256536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:58:43.638028  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:58:43.638062  256536 main.go:141] libmachine: Checking connection to Docker...
	I0920 17:58:43.638075  256536 main.go:141] libmachine: (ha-347193) Calling .GetURL
	I0920 17:58:43.639465  256536 main.go:141] libmachine: (ha-347193) DBG | Using libvirt version 6000000
	I0920 17:58:43.641835  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.642260  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.642284  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.642472  256536 main.go:141] libmachine: Docker is up and running!
	I0920 17:58:43.642489  256536 main.go:141] libmachine: Reticulating splines...
	I0920 17:58:43.642498  256536 client.go:171] duration metric: took 23.821123659s to LocalClient.Create
	I0920 17:58:43.642520  256536 start.go:167] duration metric: took 23.821189376s to libmachine.API.Create "ha-347193"
	I0920 17:58:43.642527  256536 start.go:293] postStartSetup for "ha-347193" (driver="kvm2")
	I0920 17:58:43.642537  256536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:58:43.642552  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:43.642767  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:58:43.642797  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.645726  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.646207  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.646228  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.646384  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:43.646562  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.646731  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:43.646875  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:58:43.732855  256536 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:58:43.737146  256536 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:58:43.737179  256536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 17:58:43.737266  256536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 17:58:43.737348  256536 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 17:58:43.737360  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /etc/ssl/certs/2448492.pem
	I0920 17:58:43.737457  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:58:43.746873  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 17:58:43.769680  256536 start.go:296] duration metric: took 127.135312ms for postStartSetup
	I0920 17:58:43.769753  256536 main.go:141] libmachine: (ha-347193) Calling .GetConfigRaw
	I0920 17:58:43.770539  256536 main.go:141] libmachine: (ha-347193) Calling .GetIP
	I0920 17:58:43.773368  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.773790  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.773812  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.774131  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 17:58:43.774327  256536 start.go:128] duration metric: took 23.973205594s to createHost
	I0920 17:58:43.774352  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.776811  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.777154  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.777173  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.777359  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:43.777566  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.777714  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.777851  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:43.778046  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:43.778254  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 17:58:43.778275  256536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:58:43.886468  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855123.865489975
	
	I0920 17:58:43.886492  256536 fix.go:216] guest clock: 1726855123.865489975
	I0920 17:58:43.886500  256536 fix.go:229] Guest: 2024-09-20 17:58:43.865489975 +0000 UTC Remote: 2024-09-20 17:58:43.77433865 +0000 UTC m=+24.090830996 (delta=91.151325ms)
	I0920 17:58:43.886521  256536 fix.go:200] guest clock delta is within tolerance: 91.151325ms
	I0920 17:58:43.886526  256536 start.go:83] releasing machines lock for "ha-347193", held for 24.085494311s
	I0920 17:58:43.886548  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:43.886838  256536 main.go:141] libmachine: (ha-347193) Calling .GetIP
	I0920 17:58:43.889513  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.889872  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.889896  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.890072  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:43.890584  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:43.890771  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:43.890844  256536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:58:43.890926  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.890977  256536 ssh_runner.go:195] Run: cat /version.json
	I0920 17:58:43.891005  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.893664  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.894009  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.894036  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.894186  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.894206  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:43.894370  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.894560  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.894569  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:43.894586  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.894713  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:58:43.894782  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:43.894935  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.895088  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:43.895207  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:58:43.976109  256536 ssh_runner.go:195] Run: systemctl --version
	I0920 17:58:44.018728  256536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:58:44.175337  256536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:58:44.181194  256536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:58:44.181279  256536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:58:44.199685  256536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 17:58:44.199719  256536 start.go:495] detecting cgroup driver to use...
	I0920 17:58:44.199799  256536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:58:44.215955  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:58:44.230482  256536 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:58:44.230549  256536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:58:44.244728  256536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:58:44.258137  256536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:58:44.370456  256536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:58:44.514103  256536 docker.go:233] disabling docker service ...
	I0920 17:58:44.514175  256536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:58:44.536863  256536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:58:44.550231  256536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:58:44.683486  256536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:58:44.793154  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:58:44.806166  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:58:44.823607  256536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:58:44.823754  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:44.833725  256536 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:58:44.833789  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:44.843703  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:44.853327  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:44.862729  256536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:58:44.872472  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:44.882312  256536 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:44.898952  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:44.908482  256536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:58:44.917186  256536 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 17:58:44.917249  256536 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 17:58:44.928614  256536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:58:44.938764  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:58:45.045827  256536 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 17:58:45.135797  256536 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:58:45.135868  256536 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:58:45.140339  256536 start.go:563] Will wait 60s for crictl version
	I0920 17:58:45.140407  256536 ssh_runner.go:195] Run: which crictl
	I0920 17:58:45.144096  256536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:58:45.187435  256536 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:58:45.187543  256536 ssh_runner.go:195] Run: crio --version
	I0920 17:58:45.213699  256536 ssh_runner.go:195] Run: crio --version
	I0920 17:58:45.242965  256536 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:58:45.244260  256536 main.go:141] libmachine: (ha-347193) Calling .GetIP
	I0920 17:58:45.247006  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:45.247310  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:45.247334  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:45.247515  256536 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:58:45.251447  256536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:58:45.263292  256536 kubeadm.go:883] updating cluster {Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 17:58:45.263401  256536 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:58:45.263455  256536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:58:45.293889  256536 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 17:58:45.293981  256536 ssh_runner.go:195] Run: which lz4
	I0920 17:58:45.297564  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0920 17:58:45.297677  256536 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 17:58:45.301429  256536 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 17:58:45.301465  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 17:58:46.526820  256536 crio.go:462] duration metric: took 1.229164304s to copy over tarball
	I0920 17:58:46.526906  256536 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 17:58:48.552055  256536 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.025114598s)
	I0920 17:58:48.552091  256536 crio.go:469] duration metric: took 2.025229025s to extract the tarball
	I0920 17:58:48.552101  256536 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 17:58:48.595514  256536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:58:48.637483  256536 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 17:58:48.637509  256536 cache_images.go:84] Images are preloaded, skipping loading
	I0920 17:58:48.637517  256536 kubeadm.go:934] updating node { 192.168.39.246 8443 v1.31.1 crio true true} ...
	I0920 17:58:48.637615  256536 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-347193 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 17:58:48.637681  256536 ssh_runner.go:195] Run: crio config
	I0920 17:58:48.685785  256536 cni.go:84] Creating CNI manager for ""
	I0920 17:58:48.685807  256536 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 17:58:48.685817  256536 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 17:58:48.685841  256536 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.246 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-347193 NodeName:ha-347193 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 17:58:48.686000  256536 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-347193"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
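	(Editor's aside, not part of the log: the kubeadm config rendered above uses the kubeadm.k8s.io/v1beta3 API, which the kubeadm init output further down flags as deprecated. As a rough sketch only — the output file name here is a placeholder, not something from this run — the warning's own suggested command could be applied to the file minikube writes to preview the newer spec:

		kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-migrated.yaml

	The input path is the one the log copies the generated config to before running kubeadm init.)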
	I0920 17:58:48.686029  256536 kube-vip.go:115] generating kube-vip config ...
	I0920 17:58:48.686069  256536 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 17:58:48.702147  256536 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 17:58:48.702255  256536 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
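	(Editor's aside, not part of the log: the static pod manifest above is what advertises the HA virtual IP — 192.168.39.254 on port 8443, per the address/port env vars and the cluster config. A minimal sketch, assuming kubeadm init has already completed on this node, of checking that the apiserver answers on the VIP:

		curl -k https://192.168.39.254:8443/version

	-k skips TLS verification; an unauthenticated request may be rejected with 401/403, but any HTTP response indicates the VIP is being served by kube-vip.)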
	I0920 17:58:48.702306  256536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:58:48.711975  256536 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 17:58:48.712116  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 17:58:48.721456  256536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0920 17:58:48.737853  256536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:58:48.754664  256536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 17:58:48.771220  256536 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0920 17:58:48.786667  256536 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 17:58:48.790274  256536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:58:48.802824  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:58:48.920298  256536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:58:48.937204  256536 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193 for IP: 192.168.39.246
	I0920 17:58:48.937241  256536 certs.go:194] generating shared ca certs ...
	I0920 17:58:48.937263  256536 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:48.937423  256536 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 17:58:48.937475  256536 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 17:58:48.937490  256536 certs.go:256] generating profile certs ...
	I0920 17:58:48.937561  256536 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key
	I0920 17:58:48.937579  256536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.crt with IP's: []
	I0920 17:58:49.084514  256536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.crt ...
	I0920 17:58:49.084549  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.crt: {Name:mk13d47d95d81e73445ca468d2d07a6230b36ca1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:49.084751  256536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key ...
	I0920 17:58:49.084769  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key: {Name:mk2e8c8a89fbce74c4a6cf70a50b1649d0b0d470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:49.084875  256536 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.3b9d2b82
	I0920 17:58:49.084895  256536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.3b9d2b82 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.254]
	I0920 17:58:49.268687  256536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.3b9d2b82 ...
	I0920 17:58:49.268724  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.3b9d2b82: {Name:mkc4d8dcb610e2c55a07bec95a2587e189c4dfcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:49.268922  256536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.3b9d2b82 ...
	I0920 17:58:49.268941  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.3b9d2b82: {Name:mk97e4ea20b46f77acfe6f051b666b6376a68732 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:49.269045  256536 certs.go:381] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.3b9d2b82 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt
	I0920 17:58:49.269140  256536 certs.go:385] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.3b9d2b82 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key
	I0920 17:58:49.269224  256536 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key
	I0920 17:58:49.269247  256536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt with IP's: []
	I0920 17:58:49.848819  256536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt ...
	I0920 17:58:49.848866  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt: {Name:mk6162fd8372a3b1149ed5cf0cc51090f3274530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:49.849075  256536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key ...
	I0920 17:58:49.849088  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key: {Name:mk1d07a6aa2e0b7041a110499c13eb6b4fb89fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:49.849167  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 17:58:49.849186  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 17:58:49.849200  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 17:58:49.849215  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 17:58:49.849230  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 17:58:49.849245  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 17:58:49.849263  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 17:58:49.849275  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 17:58:49.849331  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 17:58:49.849370  256536 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 17:58:49.849382  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:58:49.849407  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 17:58:49.849435  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:58:49.849460  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 17:58:49.849503  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 17:58:49.849533  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:58:49.849550  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem -> /usr/share/ca-certificates/244849.pem
	I0920 17:58:49.849572  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /usr/share/ca-certificates/2448492.pem
	I0920 17:58:49.850129  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:58:49.878422  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:58:49.902242  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:58:49.926391  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 17:58:49.950027  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 17:58:49.972641  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 17:58:49.997022  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:58:50.021804  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:58:50.045879  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:58:50.069136  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 17:58:50.092444  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 17:58:50.116716  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
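	(Editor's aside, not part of the log: the apiserver certificate copied above was generated earlier with SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.254] — see the "generating signed profile cert" line. A hedged way to double-check those SANs on the node itself, using only standard openssl against the path the log scp'd the cert to:

		sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'

	This is useful when diagnosing x509 errors in the failing tests, since both the node IP and the kube-vip VIP must appear in the SAN list.)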
	I0920 17:58:50.136353  256536 ssh_runner.go:195] Run: openssl version
	I0920 17:58:50.145863  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:58:50.157513  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:58:50.162700  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:58:50.162778  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:58:50.168948  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:58:50.180125  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 17:58:50.192366  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 17:58:50.197085  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 17:58:50.197163  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 17:58:50.203424  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 17:58:50.216229  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 17:58:50.228077  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 17:58:50.233241  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 17:58:50.233312  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 17:58:50.240012  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 17:58:50.251599  256536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:58:50.256160  256536 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:58:50.256224  256536 kubeadm.go:392] StartCluster: {Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:58:50.256322  256536 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 17:58:50.256375  256536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 17:58:50.298938  256536 cri.go:89] found id: ""
	I0920 17:58:50.299007  256536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 17:58:50.309387  256536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 17:58:50.319684  256536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 17:58:50.330318  256536 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 17:58:50.330339  256536 kubeadm.go:157] found existing configuration files:
	
	I0920 17:58:50.330388  256536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 17:58:50.339356  256536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 17:58:50.339424  256536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 17:58:50.348952  256536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 17:58:50.357964  256536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 17:58:50.358028  256536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 17:58:50.367163  256536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 17:58:50.376370  256536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 17:58:50.376452  256536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 17:58:50.385926  256536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 17:58:50.395143  256536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 17:58:50.395230  256536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
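
The cleanup sequence above reduces to a simple rule: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that is missing or does not reference that endpoint is deleted so kubeadm can regenerate it. A minimal Go sketch of that check, assuming a plain loop over the four files (paths and endpoint are taken from the log; this is not the minikube source):

// stale-kubeconfig check, illustrative only
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + name
		// grep exits non-zero when the endpoint (or the file itself) is missing
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			fmt.Printf("%s does not reference %s, removing it\n", path, endpoint)
			_ = exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}
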
	I0920 17:58:50.405341  256536 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 17:58:50.519254  256536 kubeadm.go:310] W0920 17:58:50.504659     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:58:50.520220  256536 kubeadm.go:310] W0920 17:58:50.505817     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:58:50.645093  256536 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 17:59:01.982945  256536 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 17:59:01.983025  256536 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 17:59:01.983103  256536 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 17:59:01.983216  256536 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 17:59:01.983302  256536 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 17:59:01.983352  256536 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 17:59:01.985269  256536 out.go:235]   - Generating certificates and keys ...
	I0920 17:59:01.985356  256536 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 17:59:01.985409  256536 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 17:59:01.985500  256536 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 17:59:01.985582  256536 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 17:59:01.985647  256536 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 17:59:01.985692  256536 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 17:59:01.985749  256536 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 17:59:01.985852  256536 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-347193 localhost] and IPs [192.168.39.246 127.0.0.1 ::1]
	I0920 17:59:01.985922  256536 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 17:59:01.986042  256536 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-347193 localhost] and IPs [192.168.39.246 127.0.0.1 ::1]
	I0920 17:59:01.986131  256536 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 17:59:01.986209  256536 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 17:59:01.986270  256536 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 17:59:01.986323  256536 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 17:59:01.986367  256536 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 17:59:01.986420  256536 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 17:59:01.986465  256536 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 17:59:01.986546  256536 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 17:59:01.986640  256536 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 17:59:01.986748  256536 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 17:59:01.986815  256536 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 17:59:01.988640  256536 out.go:235]   - Booting up control plane ...
	I0920 17:59:01.988728  256536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 17:59:01.988790  256536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 17:59:01.988846  256536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 17:59:01.988962  256536 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 17:59:01.989082  256536 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 17:59:01.989168  256536 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 17:59:01.989296  256536 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 17:59:01.989387  256536 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 17:59:01.989445  256536 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001806633s
	I0920 17:59:01.989505  256536 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 17:59:01.989576  256536 kubeadm.go:310] [api-check] The API server is healthy after 5.617049153s
	I0920 17:59:01.989696  256536 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 17:59:01.989803  256536 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 17:59:01.989858  256536 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 17:59:01.990057  256536 kubeadm.go:310] [mark-control-plane] Marking the node ha-347193 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 17:59:01.990116  256536 kubeadm.go:310] [bootstrap-token] Using token: copxt9.xhya9dvcru2ncb8u
	I0920 17:59:01.991737  256536 out.go:235]   - Configuring RBAC rules ...
	I0920 17:59:01.991825  256536 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 17:59:01.991930  256536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 17:59:01.992134  256536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 17:59:01.992315  256536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 17:59:01.992430  256536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 17:59:01.992514  256536 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 17:59:01.992624  256536 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 17:59:01.992678  256536 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 17:59:01.992734  256536 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 17:59:01.992741  256536 kubeadm.go:310] 
	I0920 17:59:01.992825  256536 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 17:59:01.992833  256536 kubeadm.go:310] 
	I0920 17:59:01.992910  256536 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 17:59:01.992916  256536 kubeadm.go:310] 
	I0920 17:59:01.992954  256536 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 17:59:01.993039  256536 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 17:59:01.993097  256536 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 17:59:01.993103  256536 kubeadm.go:310] 
	I0920 17:59:01.993147  256536 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 17:59:01.993155  256536 kubeadm.go:310] 
	I0920 17:59:01.993208  256536 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 17:59:01.993218  256536 kubeadm.go:310] 
	I0920 17:59:01.993275  256536 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 17:59:01.993343  256536 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 17:59:01.993400  256536 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 17:59:01.993414  256536 kubeadm.go:310] 
	I0920 17:59:01.993487  256536 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 17:59:01.993558  256536 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 17:59:01.993564  256536 kubeadm.go:310] 
	I0920 17:59:01.993661  256536 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token copxt9.xhya9dvcru2ncb8u \
	I0920 17:59:01.993755  256536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 \
	I0920 17:59:01.993786  256536 kubeadm.go:310] 	--control-plane 
	I0920 17:59:01.993795  256536 kubeadm.go:310] 
	I0920 17:59:01.993885  256536 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 17:59:01.993896  256536 kubeadm.go:310] 
	I0920 17:59:01.994008  256536 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token copxt9.xhya9dvcru2ncb8u \
	I0920 17:59:01.994126  256536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 
	I0920 17:59:01.994145  256536 cni.go:84] Creating CNI manager for ""
	I0920 17:59:01.994153  256536 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 17:59:01.995934  256536 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 17:59:01.997387  256536 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 17:59:02.002770  256536 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 17:59:02.002796  256536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 17:59:02.023932  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 17:59:02.397367  256536 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 17:59:02.397459  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:59:02.397493  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-347193 minikube.k8s.io/updated_at=2024_09_20T17_59_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=ha-347193 minikube.k8s.io/primary=true
	I0920 17:59:02.423770  256536 ops.go:34] apiserver oom_adj: -16
	I0920 17:59:02.508023  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:59:03.008485  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:59:03.508182  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:59:04.008435  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:59:04.508089  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:59:05.009064  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:59:05.101282  256536 kubeadm.go:1113] duration metric: took 2.703897001s to wait for elevateKubeSystemPrivileges
	I0920 17:59:05.101325  256536 kubeadm.go:394] duration metric: took 14.845108845s to StartCluster
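
The repeated "get sa default" calls above are a readiness poll: minikube keeps retrying roughly every half second until the default service account exists in the new cluster, and only then reports the elevateKubeSystemPrivileges duration. A rough Go sketch of that loop, assuming simple fixed-interval polling (binary and kubeconfig paths are from the log; the loop structure is an assumption, not the minikube implementation):

// wait until the default service account exists, illustrative only
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
	start := time.Now()
	for {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig", "/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Printf("default service account ready after %s\n", time.Since(start))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
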
	I0920 17:59:05.101350  256536 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:59:05.101447  256536 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:59:05.102205  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:59:05.102460  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 17:59:05.102470  256536 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 17:59:05.102452  256536 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:59:05.102580  256536 addons.go:69] Setting default-storageclass=true in profile "ha-347193"
	I0920 17:59:05.102587  256536 start.go:241] waiting for startup goroutines ...
	I0920 17:59:05.102561  256536 addons.go:69] Setting storage-provisioner=true in profile "ha-347193"
	I0920 17:59:05.102601  256536 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-347193"
	I0920 17:59:05.102614  256536 addons.go:234] Setting addon storage-provisioner=true in "ha-347193"
	I0920 17:59:05.102655  256536 host.go:66] Checking if "ha-347193" exists ...
	I0920 17:59:05.102708  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:59:05.103073  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:05.103096  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:05.103105  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:05.103128  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:05.119041  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I0920 17:59:05.119120  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40515
	I0920 17:59:05.119527  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:05.119535  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:05.120054  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:05.120064  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:05.120077  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:05.120081  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:05.120411  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:05.120459  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:05.120594  256536 main.go:141] libmachine: (ha-347193) Calling .GetState
	I0920 17:59:05.120915  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:05.120945  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:05.123163  256536 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:59:05.123416  256536 kapi.go:59] client config for ha-347193: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.crt", KeyFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key", CAFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 17:59:05.123863  256536 cert_rotation.go:140] Starting client certificate rotation controller
	I0920 17:59:05.124188  256536 addons.go:234] Setting addon default-storageclass=true in "ha-347193"
	I0920 17:59:05.124232  256536 host.go:66] Checking if "ha-347193" exists ...
	I0920 17:59:05.124598  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:05.124630  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:05.136314  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38555
	I0920 17:59:05.136762  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:05.137268  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:05.137297  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:05.137618  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:05.137833  256536 main.go:141] libmachine: (ha-347193) Calling .GetState
	I0920 17:59:05.139657  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:59:05.139802  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38355
	I0920 17:59:05.140195  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:05.140708  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:05.140736  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:05.141146  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:05.141698  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:05.141724  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:05.141892  256536 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 17:59:05.143631  256536 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:59:05.143657  256536 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 17:59:05.143686  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:59:05.146965  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:05.147514  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:59:05.147538  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:05.147705  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:59:05.147909  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:59:05.148047  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:59:05.148204  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:59:05.158393  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39527
	I0920 17:59:05.158953  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:05.159494  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:05.159527  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:05.159919  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:05.160100  256536 main.go:141] libmachine: (ha-347193) Calling .GetState
	I0920 17:59:05.161631  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:59:05.161924  256536 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 17:59:05.161945  256536 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 17:59:05.161964  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:59:05.164799  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:05.165159  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:59:05.165192  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:05.165404  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:59:05.165619  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:59:05.165790  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:59:05.165962  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:59:05.229095  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 17:59:05.299511  256536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:59:05.333515  256536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 17:59:05.572818  256536 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0920 17:59:05.872829  256536 main.go:141] libmachine: Making call to close driver server
	I0920 17:59:05.872867  256536 main.go:141] libmachine: (ha-347193) Calling .Close
	I0920 17:59:05.872944  256536 main.go:141] libmachine: Making call to close driver server
	I0920 17:59:05.872967  256536 main.go:141] libmachine: (ha-347193) Calling .Close
	I0920 17:59:05.873195  256536 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:59:05.873214  256536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:59:05.873224  256536 main.go:141] libmachine: Making call to close driver server
	I0920 17:59:05.873232  256536 main.go:141] libmachine: (ha-347193) Calling .Close
	I0920 17:59:05.873274  256536 main.go:141] libmachine: (ha-347193) DBG | Closing plugin on server side
	I0920 17:59:05.873310  256536 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:59:05.873317  256536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:59:05.873325  256536 main.go:141] libmachine: Making call to close driver server
	I0920 17:59:05.873332  256536 main.go:141] libmachine: (ha-347193) Calling .Close
	I0920 17:59:05.873517  256536 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:59:05.873541  256536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:59:05.873602  256536 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 17:59:05.873621  256536 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 17:59:05.873624  256536 main.go:141] libmachine: (ha-347193) DBG | Closing plugin on server side
	I0920 17:59:05.873718  256536 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:59:05.873742  256536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:59:05.873751  256536 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0920 17:59:05.873766  256536 round_trippers.go:469] Request Headers:
	I0920 17:59:05.873776  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:59:05.873785  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:59:05.888629  256536 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0920 17:59:05.889182  256536 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0920 17:59:05.889201  256536 round_trippers.go:469] Request Headers:
	I0920 17:59:05.889211  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:59:05.889215  256536 round_trippers.go:473]     Content-Type: application/json
	I0920 17:59:05.889223  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:59:05.892179  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:59:05.892357  256536 main.go:141] libmachine: Making call to close driver server
	I0920 17:59:05.892373  256536 main.go:141] libmachine: (ha-347193) Calling .Close
	I0920 17:59:05.892691  256536 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:59:05.892709  256536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:59:05.894279  256536 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0920 17:59:05.895496  256536 addons.go:510] duration metric: took 793.020671ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0920 17:59:05.895531  256536 start.go:246] waiting for cluster config update ...
	I0920 17:59:05.895542  256536 start.go:255] writing updated cluster config ...
	I0920 17:59:05.897257  256536 out.go:201] 
	I0920 17:59:05.898660  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:59:05.898730  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 17:59:05.900283  256536 out.go:177] * Starting "ha-347193-m02" control-plane node in "ha-347193" cluster
	I0920 17:59:05.901396  256536 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:59:05.901420  256536 cache.go:56] Caching tarball of preloaded images
	I0920 17:59:05.901510  256536 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:59:05.901521  256536 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:59:05.901597  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 17:59:05.901759  256536 start.go:360] acquireMachinesLock for ha-347193-m02: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:59:05.901802  256536 start.go:364] duration metric: took 24.671µs to acquireMachinesLock for "ha-347193-m02"
	I0920 17:59:05.901820  256536 start.go:93] Provisioning new machine with config: &{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:59:05.901885  256536 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0920 17:59:05.903637  256536 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 17:59:05.903736  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:05.903765  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:05.919718  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37733
	I0920 17:59:05.920256  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:05.920760  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:05.920783  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:05.921213  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:05.921446  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetMachineName
	I0920 17:59:05.921623  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:05.921862  256536 start.go:159] libmachine.API.Create for "ha-347193" (driver="kvm2")
	I0920 17:59:05.921894  256536 client.go:168] LocalClient.Create starting
	I0920 17:59:05.921946  256536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem
	I0920 17:59:05.921992  256536 main.go:141] libmachine: Decoding PEM data...
	I0920 17:59:05.922017  256536 main.go:141] libmachine: Parsing certificate...
	I0920 17:59:05.922095  256536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem
	I0920 17:59:05.922126  256536 main.go:141] libmachine: Decoding PEM data...
	I0920 17:59:05.922142  256536 main.go:141] libmachine: Parsing certificate...
	I0920 17:59:05.922169  256536 main.go:141] libmachine: Running pre-create checks...
	I0920 17:59:05.922181  256536 main.go:141] libmachine: (ha-347193-m02) Calling .PreCreateCheck
	I0920 17:59:05.922398  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetConfigRaw
	I0920 17:59:05.922898  256536 main.go:141] libmachine: Creating machine...
	I0920 17:59:05.922915  256536 main.go:141] libmachine: (ha-347193-m02) Calling .Create
	I0920 17:59:05.923043  256536 main.go:141] libmachine: (ha-347193-m02) Creating KVM machine...
	I0920 17:59:05.924563  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found existing default KVM network
	I0920 17:59:05.924648  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found existing private KVM network mk-ha-347193
	I0920 17:59:05.924819  256536 main.go:141] libmachine: (ha-347193-m02) Setting up store path in /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02 ...
	I0920 17:59:05.924844  256536 main.go:141] libmachine: (ha-347193-m02) Building disk image from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 17:59:05.924904  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:05.924790  256915 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:59:05.925011  256536 main.go:141] libmachine: (ha-347193-m02) Downloading /home/jenkins/minikube-integration/19679-237658/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 17:59:06.216167  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:06.216027  256915 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa...
	I0920 17:59:06.325597  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:06.325412  256915 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/ha-347193-m02.rawdisk...
	I0920 17:59:06.325640  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Writing magic tar header
	I0920 17:59:06.325658  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Writing SSH key tar header
	I0920 17:59:06.325672  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:06.325581  256915 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02 ...
	I0920 17:59:06.325689  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02
	I0920 17:59:06.325740  256536 main.go:141] libmachine: (ha-347193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02 (perms=drwx------)
	I0920 17:59:06.325762  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines
	I0920 17:59:06.325774  256536 main.go:141] libmachine: (ha-347193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines (perms=drwxr-xr-x)
	I0920 17:59:06.325786  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:59:06.325801  256536 main.go:141] libmachine: (ha-347193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube (perms=drwxr-xr-x)
	I0920 17:59:06.325822  256536 main.go:141] libmachine: (ha-347193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658 (perms=drwxrwxr-x)
	I0920 17:59:06.325834  256536 main.go:141] libmachine: (ha-347193-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 17:59:06.325857  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658
	I0920 17:59:06.325886  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 17:59:06.325897  256536 main.go:141] libmachine: (ha-347193-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 17:59:06.325927  256536 main.go:141] libmachine: (ha-347193-m02) Creating domain...
	I0920 17:59:06.325957  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Checking permissions on dir: /home/jenkins
	I0920 17:59:06.325971  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Checking permissions on dir: /home
	I0920 17:59:06.325982  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Skipping /home - not owner
	I0920 17:59:06.327271  256536 main.go:141] libmachine: (ha-347193-m02) define libvirt domain using xml: 
	I0920 17:59:06.327300  256536 main.go:141] libmachine: (ha-347193-m02) <domain type='kvm'>
	I0920 17:59:06.327310  256536 main.go:141] libmachine: (ha-347193-m02)   <name>ha-347193-m02</name>
	I0920 17:59:06.327317  256536 main.go:141] libmachine: (ha-347193-m02)   <memory unit='MiB'>2200</memory>
	I0920 17:59:06.327324  256536 main.go:141] libmachine: (ha-347193-m02)   <vcpu>2</vcpu>
	I0920 17:59:06.327330  256536 main.go:141] libmachine: (ha-347193-m02)   <features>
	I0920 17:59:06.327339  256536 main.go:141] libmachine: (ha-347193-m02)     <acpi/>
	I0920 17:59:06.327347  256536 main.go:141] libmachine: (ha-347193-m02)     <apic/>
	I0920 17:59:06.327356  256536 main.go:141] libmachine: (ha-347193-m02)     <pae/>
	I0920 17:59:06.327366  256536 main.go:141] libmachine: (ha-347193-m02)     
	I0920 17:59:06.327375  256536 main.go:141] libmachine: (ha-347193-m02)   </features>
	I0920 17:59:06.327386  256536 main.go:141] libmachine: (ha-347193-m02)   <cpu mode='host-passthrough'>
	I0920 17:59:06.327396  256536 main.go:141] libmachine: (ha-347193-m02)   
	I0920 17:59:06.327411  256536 main.go:141] libmachine: (ha-347193-m02)   </cpu>
	I0920 17:59:06.327426  256536 main.go:141] libmachine: (ha-347193-m02)   <os>
	I0920 17:59:06.327438  256536 main.go:141] libmachine: (ha-347193-m02)     <type>hvm</type>
	I0920 17:59:06.327452  256536 main.go:141] libmachine: (ha-347193-m02)     <boot dev='cdrom'/>
	I0920 17:59:06.327463  256536 main.go:141] libmachine: (ha-347193-m02)     <boot dev='hd'/>
	I0920 17:59:06.327471  256536 main.go:141] libmachine: (ha-347193-m02)     <bootmenu enable='no'/>
	I0920 17:59:06.327482  256536 main.go:141] libmachine: (ha-347193-m02)   </os>
	I0920 17:59:06.327490  256536 main.go:141] libmachine: (ha-347193-m02)   <devices>
	I0920 17:59:06.327501  256536 main.go:141] libmachine: (ha-347193-m02)     <disk type='file' device='cdrom'>
	I0920 17:59:06.327515  256536 main.go:141] libmachine: (ha-347193-m02)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/boot2docker.iso'/>
	I0920 17:59:06.327544  256536 main.go:141] libmachine: (ha-347193-m02)       <target dev='hdc' bus='scsi'/>
	I0920 17:59:06.327569  256536 main.go:141] libmachine: (ha-347193-m02)       <readonly/>
	I0920 17:59:06.327578  256536 main.go:141] libmachine: (ha-347193-m02)     </disk>
	I0920 17:59:06.327587  256536 main.go:141] libmachine: (ha-347193-m02)     <disk type='file' device='disk'>
	I0920 17:59:06.327597  256536 main.go:141] libmachine: (ha-347193-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 17:59:06.327607  256536 main.go:141] libmachine: (ha-347193-m02)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/ha-347193-m02.rawdisk'/>
	I0920 17:59:06.327619  256536 main.go:141] libmachine: (ha-347193-m02)       <target dev='hda' bus='virtio'/>
	I0920 17:59:06.327627  256536 main.go:141] libmachine: (ha-347193-m02)     </disk>
	I0920 17:59:06.327635  256536 main.go:141] libmachine: (ha-347193-m02)     <interface type='network'>
	I0920 17:59:06.327649  256536 main.go:141] libmachine: (ha-347193-m02)       <source network='mk-ha-347193'/>
	I0920 17:59:06.327659  256536 main.go:141] libmachine: (ha-347193-m02)       <model type='virtio'/>
	I0920 17:59:06.327669  256536 main.go:141] libmachine: (ha-347193-m02)     </interface>
	I0920 17:59:06.327680  256536 main.go:141] libmachine: (ha-347193-m02)     <interface type='network'>
	I0920 17:59:06.327690  256536 main.go:141] libmachine: (ha-347193-m02)       <source network='default'/>
	I0920 17:59:06.327701  256536 main.go:141] libmachine: (ha-347193-m02)       <model type='virtio'/>
	I0920 17:59:06.327711  256536 main.go:141] libmachine: (ha-347193-m02)     </interface>
	I0920 17:59:06.327722  256536 main.go:141] libmachine: (ha-347193-m02)     <serial type='pty'>
	I0920 17:59:06.327737  256536 main.go:141] libmachine: (ha-347193-m02)       <target port='0'/>
	I0920 17:59:06.327748  256536 main.go:141] libmachine: (ha-347193-m02)     </serial>
	I0920 17:59:06.327761  256536 main.go:141] libmachine: (ha-347193-m02)     <console type='pty'>
	I0920 17:59:06.327773  256536 main.go:141] libmachine: (ha-347193-m02)       <target type='serial' port='0'/>
	I0920 17:59:06.327786  256536 main.go:141] libmachine: (ha-347193-m02)     </console>
	I0920 17:59:06.327797  256536 main.go:141] libmachine: (ha-347193-m02)     <rng model='virtio'>
	I0920 17:59:06.327808  256536 main.go:141] libmachine: (ha-347193-m02)       <backend model='random'>/dev/random</backend>
	I0920 17:59:06.327819  256536 main.go:141] libmachine: (ha-347193-m02)     </rng>
	I0920 17:59:06.327825  256536 main.go:141] libmachine: (ha-347193-m02)     
	I0920 17:59:06.327833  256536 main.go:141] libmachine: (ha-347193-m02)     
	I0920 17:59:06.327840  256536 main.go:141] libmachine: (ha-347193-m02)   </devices>
	I0920 17:59:06.327847  256536 main.go:141] libmachine: (ha-347193-m02) </domain>
	I0920 17:59:06.327853  256536 main.go:141] libmachine: (ha-347193-m02) 
	I0920 17:59:06.335776  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:99:8b:51 in network default
	I0920 17:59:06.336465  256536 main.go:141] libmachine: (ha-347193-m02) Ensuring networks are active...
	I0920 17:59:06.336495  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:06.337274  256536 main.go:141] libmachine: (ha-347193-m02) Ensuring network default is active
	I0920 17:59:06.337717  256536 main.go:141] libmachine: (ha-347193-m02) Ensuring network mk-ha-347193 is active
	I0920 17:59:06.338271  256536 main.go:141] libmachine: (ha-347193-m02) Getting domain xml...
	I0920 17:59:06.339065  256536 main.go:141] libmachine: (ha-347193-m02) Creating domain...
	I0920 17:59:07.590103  256536 main.go:141] libmachine: (ha-347193-m02) Waiting to get IP...
	I0920 17:59:07.591029  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:07.591430  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:07.591465  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:07.591414  256915 retry.go:31] will retry after 226.007564ms: waiting for machine to come up
	I0920 17:59:07.819128  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:07.819593  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:07.819618  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:07.819539  256915 retry.go:31] will retry after 341.961936ms: waiting for machine to come up
	I0920 17:59:08.163271  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:08.163762  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:08.163842  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:08.163725  256915 retry.go:31] will retry after 303.677068ms: waiting for machine to come up
	I0920 17:59:08.469231  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:08.469723  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:08.469751  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:08.469670  256915 retry.go:31] will retry after 590.358913ms: waiting for machine to come up
	I0920 17:59:09.061444  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:09.061930  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:09.061952  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:09.061882  256915 retry.go:31] will retry after 511.282935ms: waiting for machine to come up
	I0920 17:59:09.574742  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:09.575187  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:09.575214  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:09.575124  256915 retry.go:31] will retry after 856.972258ms: waiting for machine to come up
	I0920 17:59:10.434260  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:10.434831  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:10.434853  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:10.434774  256915 retry.go:31] will retry after 836.344709ms: waiting for machine to come up
	I0920 17:59:11.273284  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:11.274041  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:11.274078  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:11.273981  256915 retry.go:31] will retry after 1.355754749s: waiting for machine to come up
	I0920 17:59:12.631596  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:12.631994  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:12.632021  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:12.631955  256915 retry.go:31] will retry after 1.6398171s: waiting for machine to come up
	I0920 17:59:14.273660  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:14.274139  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:14.274166  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:14.274082  256915 retry.go:31] will retry after 2.299234308s: waiting for machine to come up
	I0920 17:59:16.575079  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:16.575516  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:16.575545  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:16.575474  256915 retry.go:31] will retry after 2.142102972s: waiting for machine to come up
	I0920 17:59:18.720889  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:18.721374  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:18.721401  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:18.721344  256915 retry.go:31] will retry after 2.537816732s: waiting for machine to come up
	I0920 17:59:21.261045  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:21.261472  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:21.261500  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:21.261409  256915 retry.go:31] will retry after 3.610609319s: waiting for machine to come up
	I0920 17:59:24.876357  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:24.876860  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:24.876882  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:24.876825  256915 retry.go:31] will retry after 4.700561987s: waiting for machine to come up
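
The retries above are a wait-for-IP loop: after defining the domain, the driver keeps checking the DHCP leases of the mk-ha-347193 network for the new machine's MAC address, sleeping a little longer after every miss (226ms, 341ms, and so on up to several seconds) until a lease appears. A small Go sketch of the same idea, assuming virsh net-dhcp-leases as the lookup and an ad-hoc growth factor (network name and MAC are from the log; the rest is illustrative, not the libmachine implementation):

// poll libvirt DHCP leases with a growing backoff, illustrative only
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const network, mac = "mk-ha-347193", "52:54:00:2a:a9:ec"
	backoff := 200 * time.Millisecond
	for {
		out, _ := exec.Command("virsh", "net-dhcp-leases", network).Output()
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, mac) {
				fmt.Println("lease found:", strings.TrimSpace(line))
				return
			}
		}
		time.Sleep(backoff)
		if backoff < 5*time.Second {
			backoff += backoff / 2 // keep growing the wait, much like the retries logged above
		}
	}
}
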
	I0920 17:59:29.581568  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.582102  256536 main.go:141] libmachine: (ha-347193-m02) Found IP for machine: 192.168.39.241
	I0920 17:59:29.582125  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has current primary IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.582131  256536 main.go:141] libmachine: (ha-347193-m02) Reserving static IP address...
	I0920 17:59:29.582608  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find host DHCP lease matching {name: "ha-347193-m02", mac: "52:54:00:2a:a9:ec", ip: "192.168.39.241"} in network mk-ha-347193
	I0920 17:59:29.662003  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Getting to WaitForSSH function...
	I0920 17:59:29.662037  256536 main.go:141] libmachine: (ha-347193-m02) Reserved static IP address: 192.168.39.241
	I0920 17:59:29.662058  256536 main.go:141] libmachine: (ha-347193-m02) Waiting for SSH to be available...
	I0920 17:59:29.666033  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.666545  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:29.666582  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.666603  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Using SSH client type: external
	I0920 17:59:29.666618  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa (-rw-------)
	I0920 17:59:29.666652  256536 main.go:141] libmachine: (ha-347193-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.241 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 17:59:29.666668  256536 main.go:141] libmachine: (ha-347193-m02) DBG | About to run SSH command:
	I0920 17:59:29.666675  256536 main.go:141] libmachine: (ha-347193-m02) DBG | exit 0
	I0920 17:59:29.794185  256536 main.go:141] libmachine: (ha-347193-m02) DBG | SSH cmd err, output: <nil>: 
	I0920 17:59:29.794474  256536 main.go:141] libmachine: (ha-347193-m02) KVM machine creation complete!
	I0920 17:59:29.794737  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetConfigRaw
	I0920 17:59:29.795327  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:29.795609  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:29.795784  256536 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 17:59:29.795797  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetState
	I0920 17:59:29.797225  256536 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 17:59:29.797243  256536 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 17:59:29.797249  256536 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 17:59:29.797255  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:29.799913  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.800263  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:29.800285  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.800414  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:29.800599  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:29.800763  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:29.800897  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:29.801057  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:59:29.801269  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0920 17:59:29.801282  256536 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 17:59:29.909222  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:59:29.909246  256536 main.go:141] libmachine: Detecting the provisioner...
	I0920 17:59:29.909255  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:29.912190  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.912743  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:29.912765  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.913023  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:29.913242  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:29.913432  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:29.913591  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:29.913750  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:59:29.913984  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0920 17:59:29.913999  256536 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 17:59:30.022466  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 17:59:30.022546  256536 main.go:141] libmachine: found compatible host: buildroot
	I0920 17:59:30.022558  256536 main.go:141] libmachine: Provisioning with buildroot...
	I0920 17:59:30.022572  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetMachineName
	I0920 17:59:30.022864  256536 buildroot.go:166] provisioning hostname "ha-347193-m02"
	I0920 17:59:30.022888  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetMachineName
	I0920 17:59:30.023065  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:30.025530  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.025878  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.025926  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.026023  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:30.026228  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.026416  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.026576  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:30.026730  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:59:30.026894  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0920 17:59:30.026904  256536 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-347193-m02 && echo "ha-347193-m02" | sudo tee /etc/hostname
	I0920 17:59:30.147982  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-347193-m02
	
	I0920 17:59:30.148028  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:30.151033  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.151386  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.151409  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.151586  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:30.151765  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.151945  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.152170  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:30.152401  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:59:30.152590  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0920 17:59:30.152607  256536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-347193-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-347193-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-347193-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:59:30.271015  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:59:30.271057  256536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 17:59:30.271078  256536 buildroot.go:174] setting up certificates
	I0920 17:59:30.271087  256536 provision.go:84] configureAuth start
	I0920 17:59:30.271097  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetMachineName
	I0920 17:59:30.271410  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetIP
	I0920 17:59:30.273849  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.274342  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.274365  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.274563  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:30.277006  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.277328  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.277355  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.277454  256536 provision.go:143] copyHostCerts
	I0920 17:59:30.277493  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 17:59:30.277528  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 17:59:30.277538  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 17:59:30.277621  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 17:59:30.277724  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 17:59:30.277753  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 17:59:30.277763  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 17:59:30.277802  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 17:59:30.277864  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 17:59:30.277886  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 17:59:30.277894  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 17:59:30.277955  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 17:59:30.278028  256536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.ha-347193-m02 san=[127.0.0.1 192.168.39.241 ha-347193-m02 localhost minikube]
	I0920 17:59:30.390911  256536 provision.go:177] copyRemoteCerts
	I0920 17:59:30.390984  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:59:30.391016  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:30.394282  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.394669  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.394705  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.394848  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:30.395053  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.395190  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:30.395311  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa Username:docker}
	I0920 17:59:30.480101  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 17:59:30.480183  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 17:59:30.504430  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 17:59:30.504533  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 17:59:30.532508  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 17:59:30.532609  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 17:59:30.555072  256536 provision.go:87] duration metric: took 283.968068ms to configureAuth
	I0920 17:59:30.555106  256536 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:59:30.555298  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:59:30.555382  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:30.558201  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.558658  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.558688  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.558891  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:30.559083  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.559260  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.559393  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:30.559554  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:59:30.559783  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0920 17:59:30.559809  256536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:59:30.779495  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:59:30.779542  256536 main.go:141] libmachine: Checking connection to Docker...
	I0920 17:59:30.779553  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetURL
	I0920 17:59:30.780879  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Using libvirt version 6000000
	I0920 17:59:30.782959  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.783290  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.783321  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.783453  256536 main.go:141] libmachine: Docker is up and running!
	I0920 17:59:30.783468  256536 main.go:141] libmachine: Reticulating splines...
	I0920 17:59:30.783477  256536 client.go:171] duration metric: took 24.8615738s to LocalClient.Create
	I0920 17:59:30.783506  256536 start.go:167] duration metric: took 24.861646798s to libmachine.API.Create "ha-347193"
	I0920 17:59:30.783518  256536 start.go:293] postStartSetup for "ha-347193-m02" (driver="kvm2")
	I0920 17:59:30.783531  256536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:59:30.783550  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:30.783789  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:59:30.783813  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:30.786027  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.786349  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.786370  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.786628  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:30.786815  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.786993  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:30.787118  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa Username:docker}
	I0920 17:59:30.872345  256536 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:59:30.876519  256536 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:59:30.876550  256536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 17:59:30.876627  256536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 17:59:30.876702  256536 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 17:59:30.876712  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /etc/ssl/certs/2448492.pem
	I0920 17:59:30.876794  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:59:30.886441  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 17:59:30.909455  256536 start.go:296] duration metric: took 125.914203ms for postStartSetup
	I0920 17:59:30.909530  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetConfigRaw
	I0920 17:59:30.910141  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetIP
	I0920 17:59:30.912668  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.912976  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.913008  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.913233  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 17:59:30.913434  256536 start.go:128] duration metric: took 25.011535523s to createHost
	I0920 17:59:30.913460  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:30.915700  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.915987  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.916010  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.916226  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:30.916424  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.916603  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.916761  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:30.916950  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:59:30.917155  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0920 17:59:30.917166  256536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:59:31.026461  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855170.997739673
	
	I0920 17:59:31.026489  256536 fix.go:216] guest clock: 1726855170.997739673
	I0920 17:59:31.026496  256536 fix.go:229] Guest: 2024-09-20 17:59:30.997739673 +0000 UTC Remote: 2024-09-20 17:59:30.913448056 +0000 UTC m=+71.229940404 (delta=84.291617ms)
	I0920 17:59:31.026512  256536 fix.go:200] guest clock delta is within tolerance: 84.291617ms
	I0920 17:59:31.026517  256536 start.go:83] releasing machines lock for "ha-347193-m02", held for 25.124707242s
	I0920 17:59:31.026538  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:31.026839  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetIP
	I0920 17:59:31.029757  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:31.030179  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:31.030206  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:31.032445  256536 out.go:177] * Found network options:
	I0920 17:59:31.034196  256536 out.go:177]   - NO_PROXY=192.168.39.246
	W0920 17:59:31.035224  256536 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 17:59:31.035267  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:31.035792  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:31.035991  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:31.036100  256536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:59:31.036143  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	W0920 17:59:31.036175  256536 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 17:59:31.036267  256536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:59:31.036294  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:31.039153  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:31.039466  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:31.039563  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:31.039596  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:31.039727  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:31.039878  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:31.039897  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:31.039909  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:31.040048  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:31.040104  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:31.040219  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa Username:docker}
	I0920 17:59:31.040318  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:31.040480  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:31.040634  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa Username:docker}
	I0920 17:59:31.274255  256536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:59:31.280374  256536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:59:31.280441  256536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:59:31.296955  256536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 17:59:31.296987  256536 start.go:495] detecting cgroup driver to use...
	I0920 17:59:31.297127  256536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:59:31.313543  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:59:31.328017  256536 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:59:31.328096  256536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:59:31.341962  256536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:59:31.355931  256536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:59:31.467597  256536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:59:31.622972  256536 docker.go:233] disabling docker service ...
	I0920 17:59:31.623069  256536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:59:31.637011  256536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:59:31.649605  256536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:59:31.771555  256536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:59:31.885423  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:59:31.898889  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:59:31.916477  256536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:59:31.916540  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:59:31.926444  256536 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:59:31.926525  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:59:31.937116  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:59:31.947355  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:59:31.957415  256536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:59:31.968385  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:59:31.979172  256536 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:59:31.996319  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:59:32.006541  256536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:59:32.015815  256536 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 17:59:32.015883  256536 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 17:59:32.028240  256536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:59:32.037972  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:59:32.152278  256536 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 17:59:32.246123  256536 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:59:32.246218  256536 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:59:32.251023  256536 start.go:563] Will wait 60s for crictl version
	I0920 17:59:32.251119  256536 ssh_runner.go:195] Run: which crictl
	I0920 17:59:32.254625  256536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:59:32.289498  256536 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:59:32.289579  256536 ssh_runner.go:195] Run: crio --version
	I0920 17:59:32.316659  256536 ssh_runner.go:195] Run: crio --version
	I0920 17:59:32.344869  256536 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:59:32.346085  256536 out.go:177]   - env NO_PROXY=192.168.39.246
	I0920 17:59:32.347420  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetIP
	I0920 17:59:32.350776  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:32.351141  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:32.351172  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:32.351449  256536 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:59:32.355587  256536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:59:32.367465  256536 mustload.go:65] Loading cluster: ha-347193
	I0920 17:59:32.367713  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:59:32.368030  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:32.368075  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:32.383118  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I0920 17:59:32.383676  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:32.384195  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:32.384214  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:32.384600  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:32.384841  256536 main.go:141] libmachine: (ha-347193) Calling .GetState
	I0920 17:59:32.386464  256536 host.go:66] Checking if "ha-347193" exists ...
	I0920 17:59:32.386753  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:32.386789  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:32.402199  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35537
	I0920 17:59:32.402698  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:32.403237  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:32.403260  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:32.403569  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:32.403791  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:59:32.403932  256536 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193 for IP: 192.168.39.241
	I0920 17:59:32.403945  256536 certs.go:194] generating shared ca certs ...
	I0920 17:59:32.403966  256536 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:59:32.404125  256536 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 17:59:32.404172  256536 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 17:59:32.404185  256536 certs.go:256] generating profile certs ...
	I0920 17:59:32.404277  256536 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key
	I0920 17:59:32.404313  256536 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.32ffe274
	I0920 17:59:32.404333  256536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.32ffe274 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.241 192.168.39.254]
	I0920 17:59:32.510440  256536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.32ffe274 ...
	I0920 17:59:32.510475  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.32ffe274: {Name:mkc30548db6e83d8832ed460ef3ecdc3101e5f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:59:32.510691  256536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.32ffe274 ...
	I0920 17:59:32.510711  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.32ffe274: {Name:mk355121b8c4a956d860782a1b0c1370e7e6b83b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:59:32.510815  256536 certs.go:381] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.32ffe274 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt
	I0920 17:59:32.510982  256536 certs.go:385] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.32ffe274 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key
	I0920 17:59:32.511155  256536 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key
	I0920 17:59:32.511179  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 17:59:32.511194  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 17:59:32.511205  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 17:59:32.511220  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 17:59:32.511234  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 17:59:32.511253  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 17:59:32.511269  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 17:59:32.511287  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 17:59:32.511357  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 17:59:32.511396  256536 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 17:59:32.511405  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:59:32.511438  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 17:59:32.511471  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:59:32.511501  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 17:59:32.511554  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 17:59:32.511594  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /usr/share/ca-certificates/2448492.pem
	I0920 17:59:32.511618  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:59:32.511638  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem -> /usr/share/ca-certificates/244849.pem
	I0920 17:59:32.511683  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:59:32.515008  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:32.515405  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:59:32.515433  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:32.515642  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:59:32.515847  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:59:32.515999  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:59:32.516117  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:59:32.590305  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 17:59:32.595442  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 17:59:32.607284  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 17:59:32.611399  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0920 17:59:32.622339  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 17:59:32.626371  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 17:59:32.636850  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 17:59:32.640553  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0920 17:59:32.651329  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 17:59:32.655163  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 17:59:32.666449  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 17:59:32.670985  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0920 17:59:32.681916  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:59:32.706099  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:59:32.733293  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:59:32.756993  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 17:59:32.781045  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0920 17:59:32.804602  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 17:59:32.829390  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:59:32.854727  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:59:32.878575  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 17:59:32.902198  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:59:32.926004  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 17:59:32.950687  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 17:59:32.966783  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0920 17:59:32.982858  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 17:59:32.998897  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0920 17:59:33.015096  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 17:59:33.030999  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0920 17:59:33.046670  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 17:59:33.063118  256536 ssh_runner.go:195] Run: openssl version
	I0920 17:59:33.068899  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 17:59:33.079939  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 17:59:33.084424  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 17:59:33.084485  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 17:59:33.090249  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 17:59:33.100697  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:59:33.111242  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:59:33.115679  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:59:33.115779  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:59:33.121728  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:59:33.132827  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 17:59:33.144204  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 17:59:33.148909  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 17:59:33.149013  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 17:59:33.155176  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 17:59:33.167680  256536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:59:33.171844  256536 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:59:33.171909  256536 kubeadm.go:934] updating node {m02 192.168.39.241 8443 v1.31.1 crio true true} ...
	I0920 17:59:33.172010  256536 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-347193-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 17:59:33.172048  256536 kube-vip.go:115] generating kube-vip config ...
	I0920 17:59:33.172096  256536 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 17:59:33.188452  256536 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 17:59:33.188534  256536 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0920 17:59:33.188596  256536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:59:33.200215  256536 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 17:59:33.200283  256536 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 17:59:33.211876  256536 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 17:59:33.211910  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 17:59:33.211977  256536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 17:59:33.211977  256536 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0920 17:59:33.211976  256536 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0920 17:59:33.216444  256536 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 17:59:33.216484  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 17:59:34.138597  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 17:59:34.138688  256536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 17:59:34.143879  256536 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 17:59:34.143926  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 17:59:34.359690  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:59:34.385444  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 17:59:34.385565  256536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 17:59:34.390030  256536 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 17:59:34.390071  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
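
	Each binary is verified against the .sha256 file published next to it (the checksum= fragments in the download URLs above). A standalone Go sketch of that download-and-verify pattern, assuming the .sha256 file contains only the hex digest, could look like:

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	// fetch pulls a URL fully into memory; fine for a sketch, wasteful for 70 MB binaries.
	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		url := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
		bin, err := fetch(url)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		sum, err := fetch(url + ".sha256")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		h := sha256.Sum256(bin)
		got := hex.EncodeToString(h[:])
		want := strings.TrimSpace(string(sum)) // assumes the .sha256 file holds only the hex digest
		if got != want {
			fmt.Fprintf(os.Stderr, "checksum mismatch: got %s want %s\n", got, want)
			os.Exit(1)
		}
		fmt.Printf("kubectl verified (%d bytes)\n", len(bin))
	}
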
	I0920 17:59:34.700597  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 17:59:34.710043  256536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 17:59:34.726628  256536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:59:34.743032  256536 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 17:59:34.758894  256536 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 17:59:34.762912  256536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:59:34.775241  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:59:34.903828  256536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:59:34.920877  256536 host.go:66] Checking if "ha-347193" exists ...
	I0920 17:59:34.921370  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:34.921427  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:34.936803  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45877
	I0920 17:59:34.937329  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:34.937858  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:34.937878  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:34.938232  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:34.938485  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:59:34.938651  256536 start.go:317] joinCluster: &{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:59:34.938783  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 17:59:34.938806  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:59:34.942213  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:34.942681  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:59:34.942710  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:34.942970  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:59:34.943133  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:59:34.943329  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:59:34.943450  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:59:35.091635  256536 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:59:35.091698  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u7ake0.3opk6636yb6nqfez --discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-347193-m02 --control-plane --apiserver-advertise-address=192.168.39.241 --apiserver-bind-port=8443"
	I0920 17:59:58.407521  256536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u7ake0.3opk6636yb6nqfez --discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-347193-m02 --control-plane --apiserver-advertise-address=192.168.39.241 --apiserver-bind-port=8443": (23.315793188s)
	I0920 17:59:58.407571  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 17:59:58.935865  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-347193-m02 minikube.k8s.io/updated_at=2024_09_20T17_59_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=ha-347193 minikube.k8s.io/primary=false
	I0920 17:59:59.078065  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-347193-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 17:59:59.202785  256536 start.go:319] duration metric: took 24.264127262s to joinCluster
	I0920 17:59:59.202881  256536 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:59:59.203156  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:59:59.204855  256536 out.go:177] * Verifying Kubernetes components...
	I0920 17:59:59.206648  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:59:59.459291  256536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:59:59.534641  256536 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:59:59.534924  256536 kapi.go:59] client config for ha-347193: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.crt", KeyFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key", CAFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 17:59:59.534997  256536 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.246:8443
	I0920 17:59:59.535231  256536 node_ready.go:35] waiting up to 6m0s for node "ha-347193-m02" to be "Ready" ...
	I0920 17:59:59.535334  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 17:59:59.535343  256536 round_trippers.go:469] Request Headers:
	I0920 17:59:59.535354  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:59:59.535362  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:59:59.550229  256536 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0920 18:00:00.035883  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:00.035909  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:00.035928  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:00.035932  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:00.046596  256536 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0920 18:00:00.535658  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:00.535691  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:00.535702  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:00.535709  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:00.541409  256536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:00:01.035971  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:01.036006  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:01.036018  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:01.036024  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:01.040150  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:01.536089  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:01.536113  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:01.536123  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:01.536128  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:01.540239  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:01.540746  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:02.036207  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:02.036234  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:02.036250  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:02.036253  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:02.040514  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:02.535543  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:02.535572  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:02.535585  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:02.535591  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:02.541651  256536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 18:00:03.035563  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:03.035589  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:03.035598  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:03.035606  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:03.039108  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:03.535979  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:03.536001  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:03.536009  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:03.536019  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:03.539926  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:04.035710  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:04.035734  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:04.035743  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:04.035746  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:04.039659  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:04.040156  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:04.535537  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:04.535559  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:04.535572  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:04.535575  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:04.540040  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:05.036185  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:05.036211  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:05.036222  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:05.036229  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:05.040132  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:05.536445  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:05.536515  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:05.536529  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:05.536535  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:05.539954  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:06.036190  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:06.036217  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:06.036228  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:06.036235  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:06.039984  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:06.040529  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:06.535732  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:06.535756  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:06.535765  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:06.535769  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:06.539264  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:07.036241  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:07.036266  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:07.036274  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:07.036278  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:07.040942  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:07.535952  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:07.535977  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:07.535986  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:07.535989  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:07.539355  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:08.036196  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:08.036223  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:08.036231  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:08.036235  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:08.039851  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:08.535561  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:08.535589  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:08.535603  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:08.535609  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:08.540000  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:08.540484  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:09.035653  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:09.035683  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:09.035692  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:09.035695  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:09.039339  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:09.536386  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:09.536410  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:09.536421  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:09.536427  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:09.539675  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:10.036302  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:10.036335  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:10.036347  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:10.036352  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:10.039818  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:10.535749  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:10.535778  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:10.535787  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:10.535792  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:10.539640  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:11.036020  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:11.036050  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:11.036060  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:11.036066  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:11.039525  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:11.040266  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:11.535666  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:11.535691  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:11.535697  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:11.535700  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:11.538988  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:12.036243  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:12.036277  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:12.036285  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:12.036289  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:12.040685  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:12.535894  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:12.535923  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:12.535931  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:12.535936  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:12.539877  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:13.036023  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:13.036052  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:13.036062  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:13.036068  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:13.039752  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:13.040483  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:13.535855  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:13.535883  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:13.535894  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:13.535899  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:13.539399  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:14.036503  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:14.036530  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:14.036539  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:14.036542  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:14.040297  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:14.536446  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:14.536477  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:14.536489  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:14.536496  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:14.539974  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:15.036448  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:15.036478  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:15.036489  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:15.036495  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:15.040620  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:15.041167  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:15.535516  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:15.535545  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:15.535553  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:15.535559  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:15.539083  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:16.036510  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:16.036537  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:16.036546  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:16.036549  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:16.041085  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:16.535826  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:16.535849  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:16.535861  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:16.535865  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:16.539059  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:17.036117  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:17.036144  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:17.036153  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:17.036160  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:17.040478  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:17.535518  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:17.535543  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:17.535552  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:17.535556  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:17.540491  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:17.541065  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:18.035427  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:18.035454  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.035462  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.035467  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.039556  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:18.040741  256536 node_ready.go:49] node "ha-347193-m02" has status "Ready":"True"
	I0920 18:00:18.040773  256536 node_ready.go:38] duration metric: took 18.505523491s for node "ha-347193-m02" to be "Ready" ...
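
	The polling above is a plain Ready-condition check on the new node. A condensed client-go sketch of the same loop (standalone, not minikube's code; the kubeconfig path is the one loaded earlier in this log):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19679-237658/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-347193-m02", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // the log above polls on a similar cadence
		}
	}
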
	I0920 18:00:18.040784  256536 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:00:18.040932  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:00:18.040941  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.040957  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.040962  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.046873  256536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:00:18.054373  256536 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6llmd" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.054477  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6llmd
	I0920 18:00:18.054485  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.054492  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.054496  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.058597  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:18.060016  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:18.060034  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.060042  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.060047  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.062721  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:00:18.063302  256536 pod_ready.go:93] pod "coredns-7c65d6cfc9-6llmd" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:18.063326  256536 pod_ready.go:82] duration metric: took 8.921017ms for pod "coredns-7c65d6cfc9-6llmd" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.063339  256536 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bkmhn" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.063419  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bkmhn
	I0920 18:00:18.063429  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.063437  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.063442  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.065673  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:00:18.066345  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:18.066361  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.066368  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.066372  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.068535  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:00:18.068957  256536 pod_ready.go:93] pod "coredns-7c65d6cfc9-bkmhn" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:18.068975  256536 pod_ready.go:82] duration metric: took 5.629047ms for pod "coredns-7c65d6cfc9-bkmhn" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.068985  256536 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.069042  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-347193
	I0920 18:00:18.069050  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.069058  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.069064  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.071215  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:00:18.071725  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:18.071741  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.071748  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.071752  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.076248  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:18.076783  256536 pod_ready.go:93] pod "etcd-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:18.076809  256536 pod_ready.go:82] duration metric: took 7.814986ms for pod "etcd-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.076822  256536 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.076903  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-347193-m02
	I0920 18:00:18.076913  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.076933  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.076942  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.079425  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:00:18.080041  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:18.080062  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.080070  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.080073  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.082658  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:00:18.083080  256536 pod_ready.go:93] pod "etcd-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:18.083098  256536 pod_ready.go:82] duration metric: took 6.269137ms for pod "etcd-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.083120  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.235451  256536 request.go:632] Waited for 152.265053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193
	I0920 18:00:18.235515  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193
	I0920 18:00:18.235520  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.235529  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.235538  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.239325  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:18.436436  256536 request.go:632] Waited for 196.38005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:18.436497  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:18.436502  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.436510  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.436513  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.439995  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:18.440920  256536 pod_ready.go:93] pod "kube-apiserver-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:18.440944  256536 pod_ready.go:82] duration metric: took 357.817605ms for pod "kube-apiserver-ha-347193" in "kube-system" namespace to be "Ready" ...
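
	The "due to client-side throttling, not priority and fairness" lines are produced by client-go's own rate limiter; the rest.Config dump above shows QPS:0, Burst:0, so the library defaults (roughly 5 requests/s with a burst of 10) apply. A sketch of how a client could raise those limits (illustrative values, not what minikube does):

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19679-237658/kubeconfig")
		if err != nil {
			panic(err)
		}
		// Defaults are modest (about 5 QPS, burst 10); that limiter is what prints
		// "Waited for ... due to client-side throttling" when requests pile up.
		cfg.QPS = 50
		cfg.Burst = 100
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Printf("client ready: %T\n", cs)
	}
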
	I0920 18:00:18.440954  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.636140  256536 request.go:632] Waited for 195.087959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193-m02
	I0920 18:00:18.636243  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193-m02
	I0920 18:00:18.636255  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.636268  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.636280  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.640087  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:18.836246  256536 request.go:632] Waited for 195.361959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:18.836311  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:18.836316  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.836323  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.836328  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.840653  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:18.841777  256536 pod_ready.go:93] pod "kube-apiserver-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:18.841799  256536 pod_ready.go:82] duration metric: took 400.83724ms for pod "kube-apiserver-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.841809  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:19.036009  256536 request.go:632] Waited for 194.129324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193
	I0920 18:00:19.036093  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193
	I0920 18:00:19.036098  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:19.036106  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:19.036111  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:19.039737  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:19.236270  256536 request.go:632] Waited for 195.455754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:19.236346  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:19.236354  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:19.236365  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:19.236373  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:19.241800  256536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:00:19.242348  256536 pod_ready.go:93] pod "kube-controller-manager-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:19.242373  256536 pod_ready.go:82] duration metric: took 400.554651ms for pod "kube-controller-manager-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:19.242385  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:19.436357  256536 request.go:632] Waited for 193.884621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193-m02
	I0920 18:00:19.436449  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193-m02
	I0920 18:00:19.436463  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:19.436474  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:19.436485  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:19.446510  256536 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0920 18:00:19.635563  256536 request.go:632] Waited for 188.301909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:19.635648  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:19.635653  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:19.635661  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:19.635665  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:19.639157  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:19.639875  256536 pod_ready.go:93] pod "kube-controller-manager-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:19.639909  256536 pod_ready.go:82] duration metric: took 397.513343ms for pod "kube-controller-manager-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:19.639925  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ffdvq" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:19.836481  256536 request.go:632] Waited for 196.456867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ffdvq
	I0920 18:00:19.836549  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ffdvq
	I0920 18:00:19.836555  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:19.836563  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:19.836568  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:19.840480  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:20.036151  256536 request.go:632] Waited for 194.863834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:20.036217  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:20.036230  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:20.036238  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:20.036242  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:20.040324  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:20.040897  256536 pod_ready.go:93] pod "kube-proxy-ffdvq" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:20.040926  256536 pod_ready.go:82] duration metric: took 400.990573ms for pod "kube-proxy-ffdvq" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:20.040940  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rdqkg" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:20.235885  256536 request.go:632] Waited for 194.862598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdqkg
	I0920 18:00:20.235966  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdqkg
	I0920 18:00:20.235973  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:20.235983  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:20.235989  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:20.239847  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:20.436319  256536 request.go:632] Waited for 195.461517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:20.436386  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:20.436391  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:20.436399  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:20.436403  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:20.440218  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:20.440901  256536 pod_ready.go:93] pod "kube-proxy-rdqkg" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:20.440935  256536 pod_ready.go:82] duration metric: took 399.983159ms for pod "kube-proxy-rdqkg" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:20.440946  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:20.636078  256536 request.go:632] Waited for 195.028076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193
	I0920 18:00:20.636162  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193
	I0920 18:00:20.636181  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:20.636193  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:20.636206  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:20.639813  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:20.835867  256536 request.go:632] Waited for 195.433474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:20.835962  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:20.835968  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:20.835976  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:20.835982  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:20.839792  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:20.840650  256536 pod_ready.go:93] pod "kube-scheduler-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:20.840681  256536 pod_ready.go:82] duration metric: took 399.725704ms for pod "kube-scheduler-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:20.840695  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:21.036247  256536 request.go:632] Waited for 195.4677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193-m02
	I0920 18:00:21.036330  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193-m02
	I0920 18:00:21.036335  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:21.036344  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:21.036348  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:21.040845  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:21.235815  256536 request.go:632] Waited for 194.360469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:21.235904  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:21.235911  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:21.235921  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:21.235928  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:21.239741  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:21.240157  256536 pod_ready.go:93] pod "kube-scheduler-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:21.240181  256536 pod_ready.go:82] duration metric: took 399.476235ms for pod "kube-scheduler-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:21.240195  256536 pod_ready.go:39] duration metric: took 3.199359276s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
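
	The waits above cycle through the system-critical component labels and check each pod's Ready condition. A standalone client-go sketch of the same sweep (not minikube's code; kubeconfig path taken from this log):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(p corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19679-237658/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
		}
		for _, sel := range selectors {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				panic(err)
			}
			for _, p := range pods.Items {
				fmt.Printf("%-45s ready=%v\n", p.Name, podReady(p))
			}
		}
	}
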
	I0920 18:00:21.240216  256536 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:00:21.240276  256536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:00:21.258549  256536 api_server.go:72] duration metric: took 22.055620378s to wait for apiserver process to appear ...
	I0920 18:00:21.258580  256536 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:00:21.258610  256536 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I0920 18:00:21.263626  256536 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I0920 18:00:21.263706  256536 round_trippers.go:463] GET https://192.168.39.246:8443/version
	I0920 18:00:21.263711  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:21.263719  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:21.263724  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:21.265005  256536 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0920 18:00:21.265129  256536 api_server.go:141] control plane version: v1.31.1
	I0920 18:00:21.265148  256536 api_server.go:131] duration metric: took 6.561205ms to wait for apiserver health ...
	I0920 18:00:21.265155  256536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:00:21.435532  256536 request.go:632] Waited for 170.291625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:00:21.435621  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:00:21.435628  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:21.435636  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:21.435639  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:21.442020  256536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 18:00:21.446425  256536 system_pods.go:59] 17 kube-system pods found
	I0920 18:00:21.446458  256536 system_pods.go:61] "coredns-7c65d6cfc9-6llmd" [8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92] Running
	I0920 18:00:21.446463  256536 system_pods.go:61] "coredns-7c65d6cfc9-bkmhn" [f7862a6e-54cc-450c-b283-d20fb99f51ce] Running
	I0920 18:00:21.446467  256536 system_pods.go:61] "etcd-ha-347193" [e13fc198-b02b-4f0a-bf76-be0f519d9d57] Running
	I0920 18:00:21.446470  256536 system_pods.go:61] "etcd-ha-347193-m02" [4ea69953-b35a-4ae9-8153-cea3be5e2c1c] Running
	I0920 18:00:21.446473  256536 system_pods.go:61] "kindnet-cqbxl" [3d49a6b1-5be5-4d96-98e3-bd05035a2d1b] Running
	I0920 18:00:21.446478  256536 system_pods.go:61] "kindnet-z24zp" [9271d251-2d95-4b23-85f3-7da6567b2fc3] Running
	I0920 18:00:21.446482  256536 system_pods.go:61] "kube-apiserver-ha-347193" [993ccf05-a39a-42b4-b82d-936531325dc4] Running
	I0920 18:00:21.446485  256536 system_pods.go:61] "kube-apiserver-ha-347193-m02" [43cd77b8-8925-4a04-a8cf-1b9a0cbbc502] Running
	I0920 18:00:21.446489  256536 system_pods.go:61] "kube-controller-manager-ha-347193" [6de3a14b-6587-45d4-aaee-1256b9c327cc] Running
	I0920 18:00:21.446492  256536 system_pods.go:61] "kube-controller-manager-ha-347193-m02" [cdf3f4d7-0675-4c59-8ad5-8901104d71c3] Running
	I0920 18:00:21.446495  256536 system_pods.go:61] "kube-proxy-ffdvq" [97120f62-0af2-405a-b8ff-639c72a39a2d] Running
	I0920 18:00:21.446500  256536 system_pods.go:61] "kube-proxy-rdqkg" [d9ae4e37-b29b-400a-af2d-544da4024069] Running
	I0920 18:00:21.446502  256536 system_pods.go:61] "kube-scheduler-ha-347193" [910baa0e-404e-4ac7-9262-848672eaf9cf] Running
	I0920 18:00:21.446505  256536 system_pods.go:61] "kube-scheduler-ha-347193-m02" [623b9c3b-b998-4516-a53e-17e9d8970594] Running
	I0920 18:00:21.446508  256536 system_pods.go:61] "kube-vip-ha-347193" [20d6faa4-600f-4bd0-8acb-1f95c047da58] Running
	I0920 18:00:21.446511  256536 system_pods.go:61] "kube-vip-ha-347193-m02" [1455826c-7b3d-40f7-bb15-a9861ee95e19] Running
	I0920 18:00:21.446516  256536 system_pods.go:61] "storage-provisioner" [8924f7ce-85a0-4587-9c05-8a74c7113e9e] Running
	I0920 18:00:21.446521  256536 system_pods.go:74] duration metric: took 181.36053ms to wait for pod list to return data ...
	I0920 18:00:21.446528  256536 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:00:21.636065  256536 request.go:632] Waited for 189.405126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:00:21.636135  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:00:21.636141  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:21.636148  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:21.636153  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:21.640839  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:21.641122  256536 default_sa.go:45] found service account: "default"
	I0920 18:00:21.641142  256536 default_sa.go:55] duration metric: took 194.607217ms for default service account to be created ...
	I0920 18:00:21.641151  256536 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:00:21.835580  256536 request.go:632] Waited for 194.337083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:00:21.835675  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:00:21.835682  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:21.835689  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:21.835693  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:21.841225  256536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:00:21.846004  256536 system_pods.go:86] 17 kube-system pods found
	I0920 18:00:21.846039  256536 system_pods.go:89] "coredns-7c65d6cfc9-6llmd" [8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92] Running
	I0920 18:00:21.846046  256536 system_pods.go:89] "coredns-7c65d6cfc9-bkmhn" [f7862a6e-54cc-450c-b283-d20fb99f51ce] Running
	I0920 18:00:21.846051  256536 system_pods.go:89] "etcd-ha-347193" [e13fc198-b02b-4f0a-bf76-be0f519d9d57] Running
	I0920 18:00:21.846055  256536 system_pods.go:89] "etcd-ha-347193-m02" [4ea69953-b35a-4ae9-8153-cea3be5e2c1c] Running
	I0920 18:00:21.846059  256536 system_pods.go:89] "kindnet-cqbxl" [3d49a6b1-5be5-4d96-98e3-bd05035a2d1b] Running
	I0920 18:00:21.846062  256536 system_pods.go:89] "kindnet-z24zp" [9271d251-2d95-4b23-85f3-7da6567b2fc3] Running
	I0920 18:00:21.846066  256536 system_pods.go:89] "kube-apiserver-ha-347193" [993ccf05-a39a-42b4-b82d-936531325dc4] Running
	I0920 18:00:21.846070  256536 system_pods.go:89] "kube-apiserver-ha-347193-m02" [43cd77b8-8925-4a04-a8cf-1b9a0cbbc502] Running
	I0920 18:00:21.846074  256536 system_pods.go:89] "kube-controller-manager-ha-347193" [6de3a14b-6587-45d4-aaee-1256b9c327cc] Running
	I0920 18:00:21.846078  256536 system_pods.go:89] "kube-controller-manager-ha-347193-m02" [cdf3f4d7-0675-4c59-8ad5-8901104d71c3] Running
	I0920 18:00:21.846082  256536 system_pods.go:89] "kube-proxy-ffdvq" [97120f62-0af2-405a-b8ff-639c72a39a2d] Running
	I0920 18:00:21.846085  256536 system_pods.go:89] "kube-proxy-rdqkg" [d9ae4e37-b29b-400a-af2d-544da4024069] Running
	I0920 18:00:21.846089  256536 system_pods.go:89] "kube-scheduler-ha-347193" [910baa0e-404e-4ac7-9262-848672eaf9cf] Running
	I0920 18:00:21.846093  256536 system_pods.go:89] "kube-scheduler-ha-347193-m02" [623b9c3b-b998-4516-a53e-17e9d8970594] Running
	I0920 18:00:21.846097  256536 system_pods.go:89] "kube-vip-ha-347193" [20d6faa4-600f-4bd0-8acb-1f95c047da58] Running
	I0920 18:00:21.846108  256536 system_pods.go:89] "kube-vip-ha-347193-m02" [1455826c-7b3d-40f7-bb15-a9861ee95e19] Running
	I0920 18:00:21.846111  256536 system_pods.go:89] "storage-provisioner" [8924f7ce-85a0-4587-9c05-8a74c7113e9e] Running
	I0920 18:00:21.846118  256536 system_pods.go:126] duration metric: took 204.961033ms to wait for k8s-apps to be running ...
	I0920 18:00:21.846127  256536 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:00:21.846175  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:00:21.862644  256536 system_svc.go:56] duration metric: took 16.499746ms WaitForService to wait for kubelet
	I0920 18:00:21.862683  256536 kubeadm.go:582] duration metric: took 22.659763297s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:00:21.862708  256536 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:00:22.036245  256536 request.go:632] Waited for 173.422886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I0920 18:00:22.036330  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I0920 18:00:22.036338  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:22.036349  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:22.036357  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:22.040138  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:22.040911  256536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:00:22.040940  256536 node_conditions.go:123] node cpu capacity is 2
	I0920 18:00:22.040957  256536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:00:22.040962  256536 node_conditions.go:123] node cpu capacity is 2
	I0920 18:00:22.040967  256536 node_conditions.go:105] duration metric: took 178.253105ms to run NodePressure ...
	I0920 18:00:22.040983  256536 start.go:241] waiting for startup goroutines ...
	I0920 18:00:22.041015  256536 start.go:255] writing updated cluster config ...
	I0920 18:00:22.043512  256536 out.go:201] 
	I0920 18:00:22.045235  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:00:22.045367  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 18:00:22.047395  256536 out.go:177] * Starting "ha-347193-m03" control-plane node in "ha-347193" cluster
	I0920 18:00:22.048977  256536 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:00:22.049012  256536 cache.go:56] Caching tarball of preloaded images
	I0920 18:00:22.049136  256536 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:00:22.049148  256536 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:00:22.049248  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 18:00:22.049435  256536 start.go:360] acquireMachinesLock for ha-347193-m03: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:00:22.049481  256536 start.go:364] duration metric: took 26µs to acquireMachinesLock for "ha-347193-m03"
	I0920 18:00:22.049501  256536 start.go:93] Provisioning new machine with config: &{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:00:22.049631  256536 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0920 18:00:22.051727  256536 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 18:00:22.051867  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:00:22.051912  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:00:22.067720  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40165
	I0920 18:00:22.068325  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:00:22.068884  256536 main.go:141] libmachine: Using API Version  1
	I0920 18:00:22.068907  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:00:22.069270  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:00:22.069481  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetMachineName
	I0920 18:00:22.069638  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:22.069845  256536 start.go:159] libmachine.API.Create for "ha-347193" (driver="kvm2")
	I0920 18:00:22.069873  256536 client.go:168] LocalClient.Create starting
	I0920 18:00:22.069933  256536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem
	I0920 18:00:22.069978  256536 main.go:141] libmachine: Decoding PEM data...
	I0920 18:00:22.069993  256536 main.go:141] libmachine: Parsing certificate...
	I0920 18:00:22.070053  256536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem
	I0920 18:00:22.070073  256536 main.go:141] libmachine: Decoding PEM data...
	I0920 18:00:22.070084  256536 main.go:141] libmachine: Parsing certificate...
	I0920 18:00:22.070099  256536 main.go:141] libmachine: Running pre-create checks...
	I0920 18:00:22.070107  256536 main.go:141] libmachine: (ha-347193-m03) Calling .PreCreateCheck
	I0920 18:00:22.070282  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetConfigRaw
	I0920 18:00:22.070730  256536 main.go:141] libmachine: Creating machine...
	I0920 18:00:22.070742  256536 main.go:141] libmachine: (ha-347193-m03) Calling .Create
	I0920 18:00:22.070908  256536 main.go:141] libmachine: (ha-347193-m03) Creating KVM machine...
	I0920 18:00:22.072409  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found existing default KVM network
	I0920 18:00:22.072583  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found existing private KVM network mk-ha-347193
	I0920 18:00:22.072739  256536 main.go:141] libmachine: (ha-347193-m03) Setting up store path in /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03 ...
	I0920 18:00:22.072765  256536 main.go:141] libmachine: (ha-347193-m03) Building disk image from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 18:00:22.072834  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:22.072724  257331 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:00:22.072916  256536 main.go:141] libmachine: (ha-347193-m03) Downloading /home/jenkins/minikube-integration/19679-237658/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 18:00:22.338205  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:22.338046  257331 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa...
	I0920 18:00:22.401743  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:22.401600  257331 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/ha-347193-m03.rawdisk...
	I0920 18:00:22.401769  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Writing magic tar header
	I0920 18:00:22.401826  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Writing SSH key tar header
	I0920 18:00:22.401856  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:22.401719  257331 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03 ...
	I0920 18:00:22.401875  256536 main.go:141] libmachine: (ha-347193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03 (perms=drwx------)
	I0920 18:00:22.401895  256536 main.go:141] libmachine: (ha-347193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:00:22.401963  256536 main.go:141] libmachine: (ha-347193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube (perms=drwxr-xr-x)
	I0920 18:00:22.401981  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03
	I0920 18:00:22.401996  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines
	I0920 18:00:22.402006  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:00:22.402019  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658
	I0920 18:00:22.402031  256536 main.go:141] libmachine: (ha-347193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658 (perms=drwxrwxr-x)
	I0920 18:00:22.402043  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:00:22.402054  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:00:22.402064  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Checking permissions on dir: /home
	I0920 18:00:22.402077  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Skipping /home - not owner
	I0920 18:00:22.402112  256536 main.go:141] libmachine: (ha-347193-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:00:22.402132  256536 main.go:141] libmachine: (ha-347193-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:00:22.402145  256536 main.go:141] libmachine: (ha-347193-m03) Creating domain...
	I0920 18:00:22.403163  256536 main.go:141] libmachine: (ha-347193-m03) define libvirt domain using xml: 
	I0920 18:00:22.403182  256536 main.go:141] libmachine: (ha-347193-m03) <domain type='kvm'>
	I0920 18:00:22.403192  256536 main.go:141] libmachine: (ha-347193-m03)   <name>ha-347193-m03</name>
	I0920 18:00:22.403198  256536 main.go:141] libmachine: (ha-347193-m03)   <memory unit='MiB'>2200</memory>
	I0920 18:00:22.403205  256536 main.go:141] libmachine: (ha-347193-m03)   <vcpu>2</vcpu>
	I0920 18:00:22.403215  256536 main.go:141] libmachine: (ha-347193-m03)   <features>
	I0920 18:00:22.403225  256536 main.go:141] libmachine: (ha-347193-m03)     <acpi/>
	I0920 18:00:22.403233  256536 main.go:141] libmachine: (ha-347193-m03)     <apic/>
	I0920 18:00:22.403245  256536 main.go:141] libmachine: (ha-347193-m03)     <pae/>
	I0920 18:00:22.403253  256536 main.go:141] libmachine: (ha-347193-m03)     
	I0920 18:00:22.403263  256536 main.go:141] libmachine: (ha-347193-m03)   </features>
	I0920 18:00:22.403273  256536 main.go:141] libmachine: (ha-347193-m03)   <cpu mode='host-passthrough'>
	I0920 18:00:22.403286  256536 main.go:141] libmachine: (ha-347193-m03)   
	I0920 18:00:22.403296  256536 main.go:141] libmachine: (ha-347193-m03)   </cpu>
	I0920 18:00:22.403305  256536 main.go:141] libmachine: (ha-347193-m03)   <os>
	I0920 18:00:22.403315  256536 main.go:141] libmachine: (ha-347193-m03)     <type>hvm</type>
	I0920 18:00:22.403326  256536 main.go:141] libmachine: (ha-347193-m03)     <boot dev='cdrom'/>
	I0920 18:00:22.403336  256536 main.go:141] libmachine: (ha-347193-m03)     <boot dev='hd'/>
	I0920 18:00:22.403346  256536 main.go:141] libmachine: (ha-347193-m03)     <bootmenu enable='no'/>
	I0920 18:00:22.403355  256536 main.go:141] libmachine: (ha-347193-m03)   </os>
	I0920 18:00:22.403364  256536 main.go:141] libmachine: (ha-347193-m03)   <devices>
	I0920 18:00:22.403375  256536 main.go:141] libmachine: (ha-347193-m03)     <disk type='file' device='cdrom'>
	I0920 18:00:22.403406  256536 main.go:141] libmachine: (ha-347193-m03)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/boot2docker.iso'/>
	I0920 18:00:22.403432  256536 main.go:141] libmachine: (ha-347193-m03)       <target dev='hdc' bus='scsi'/>
	I0920 18:00:22.403442  256536 main.go:141] libmachine: (ha-347193-m03)       <readonly/>
	I0920 18:00:22.403452  256536 main.go:141] libmachine: (ha-347193-m03)     </disk>
	I0920 18:00:22.403465  256536 main.go:141] libmachine: (ha-347193-m03)     <disk type='file' device='disk'>
	I0920 18:00:22.403477  256536 main.go:141] libmachine: (ha-347193-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:00:22.403493  256536 main.go:141] libmachine: (ha-347193-m03)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/ha-347193-m03.rawdisk'/>
	I0920 18:00:22.403506  256536 main.go:141] libmachine: (ha-347193-m03)       <target dev='hda' bus='virtio'/>
	I0920 18:00:22.403515  256536 main.go:141] libmachine: (ha-347193-m03)     </disk>
	I0920 18:00:22.403522  256536 main.go:141] libmachine: (ha-347193-m03)     <interface type='network'>
	I0920 18:00:22.403530  256536 main.go:141] libmachine: (ha-347193-m03)       <source network='mk-ha-347193'/>
	I0920 18:00:22.403537  256536 main.go:141] libmachine: (ha-347193-m03)       <model type='virtio'/>
	I0920 18:00:22.403545  256536 main.go:141] libmachine: (ha-347193-m03)     </interface>
	I0920 18:00:22.403554  256536 main.go:141] libmachine: (ha-347193-m03)     <interface type='network'>
	I0920 18:00:22.403563  256536 main.go:141] libmachine: (ha-347193-m03)       <source network='default'/>
	I0920 18:00:22.403572  256536 main.go:141] libmachine: (ha-347193-m03)       <model type='virtio'/>
	I0920 18:00:22.403580  256536 main.go:141] libmachine: (ha-347193-m03)     </interface>
	I0920 18:00:22.403598  256536 main.go:141] libmachine: (ha-347193-m03)     <serial type='pty'>
	I0920 18:00:22.403608  256536 main.go:141] libmachine: (ha-347193-m03)       <target port='0'/>
	I0920 18:00:22.403614  256536 main.go:141] libmachine: (ha-347193-m03)     </serial>
	I0920 18:00:22.403626  256536 main.go:141] libmachine: (ha-347193-m03)     <console type='pty'>
	I0920 18:00:22.403638  256536 main.go:141] libmachine: (ha-347193-m03)       <target type='serial' port='0'/>
	I0920 18:00:22.403648  256536 main.go:141] libmachine: (ha-347193-m03)     </console>
	I0920 18:00:22.403655  256536 main.go:141] libmachine: (ha-347193-m03)     <rng model='virtio'>
	I0920 18:00:22.403665  256536 main.go:141] libmachine: (ha-347193-m03)       <backend model='random'>/dev/random</backend>
	I0920 18:00:22.403669  256536 main.go:141] libmachine: (ha-347193-m03)     </rng>
	I0920 18:00:22.403674  256536 main.go:141] libmachine: (ha-347193-m03)     
	I0920 18:00:22.403680  256536 main.go:141] libmachine: (ha-347193-m03)     
	I0920 18:00:22.403685  256536 main.go:141] libmachine: (ha-347193-m03)   </devices>
	I0920 18:00:22.403691  256536 main.go:141] libmachine: (ha-347193-m03) </domain>
	I0920 18:00:22.403701  256536 main.go:141] libmachine: (ha-347193-m03) 
	I0920 18:00:22.411929  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:7f:8c:82 in network default
	I0920 18:00:22.412669  256536 main.go:141] libmachine: (ha-347193-m03) Ensuring networks are active...
	I0920 18:00:22.412689  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:22.413649  256536 main.go:141] libmachine: (ha-347193-m03) Ensuring network default is active
	I0920 18:00:22.414029  256536 main.go:141] libmachine: (ha-347193-m03) Ensuring network mk-ha-347193 is active
	I0920 18:00:22.414605  256536 main.go:141] libmachine: (ha-347193-m03) Getting domain xml...
	I0920 18:00:22.415371  256536 main.go:141] libmachine: (ha-347193-m03) Creating domain...
	I0920 18:00:23.690471  256536 main.go:141] libmachine: (ha-347193-m03) Waiting to get IP...
	I0920 18:00:23.691341  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:23.691801  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:23.691826  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:23.691771  257331 retry.go:31] will retry after 305.28803ms: waiting for machine to come up
	I0920 18:00:23.998411  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:23.999018  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:23.999037  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:23.998982  257331 retry.go:31] will retry after 325.282486ms: waiting for machine to come up
	I0920 18:00:24.325459  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:24.326038  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:24.326064  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:24.325997  257331 retry.go:31] will retry after 443.699467ms: waiting for machine to come up
	I0920 18:00:24.771839  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:24.772332  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:24.772360  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:24.772272  257331 retry.go:31] will retry after 425.456586ms: waiting for machine to come up
	I0920 18:00:25.199046  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:25.199733  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:25.199762  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:25.199691  257331 retry.go:31] will retry after 471.75067ms: waiting for machine to come up
	I0920 18:00:25.673494  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:25.674017  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:25.674046  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:25.673921  257331 retry.go:31] will retry after 587.223627ms: waiting for machine to come up
	I0920 18:00:26.262671  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:26.263313  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:26.263345  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:26.263252  257331 retry.go:31] will retry after 883.317566ms: waiting for machine to come up
	I0920 18:00:27.148800  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:27.149230  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:27.149252  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:27.149182  257331 retry.go:31] will retry after 1.299880509s: waiting for machine to come up
	I0920 18:00:28.450607  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:28.451213  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:28.451237  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:28.451146  257331 retry.go:31] will retry after 1.154105376s: waiting for machine to come up
	I0920 18:00:29.607236  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:29.607729  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:29.607762  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:29.607684  257331 retry.go:31] will retry after 1.399507975s: waiting for machine to come up
	I0920 18:00:31.009117  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:31.009614  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:31.009645  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:31.009556  257331 retry.go:31] will retry after 2.255483173s: waiting for machine to come up
	I0920 18:00:33.266732  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:33.267250  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:33.267280  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:33.267181  257331 retry.go:31] will retry after 3.331108113s: waiting for machine to come up
	I0920 18:00:36.602825  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:36.603313  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:36.603336  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:36.603267  257331 retry.go:31] will retry after 4.086437861s: waiting for machine to come up
	I0920 18:00:40.692990  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:40.693433  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:40.693462  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:40.693375  257331 retry.go:31] will retry after 5.025372778s: waiting for machine to come up
	I0920 18:00:45.723079  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:45.723614  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has current primary IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:45.723644  256536 main.go:141] libmachine: (ha-347193-m03) Found IP for machine: 192.168.39.250
	I0920 18:00:45.723658  256536 main.go:141] libmachine: (ha-347193-m03) Reserving static IP address...
	I0920 18:00:45.724041  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find host DHCP lease matching {name: "ha-347193-m03", mac: "52:54:00:80:1a:4c", ip: "192.168.39.250"} in network mk-ha-347193
	I0920 18:00:45.808270  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Getting to WaitForSSH function...
	I0920 18:00:45.808305  256536 main.go:141] libmachine: (ha-347193-m03) Reserved static IP address: 192.168.39.250
	I0920 18:00:45.808317  256536 main.go:141] libmachine: (ha-347193-m03) Waiting for SSH to be available...
	I0920 18:00:45.811196  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:45.811660  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:minikube Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:45.811697  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:45.811825  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Using SSH client type: external
	I0920 18:00:45.811848  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa (-rw-------)
	I0920 18:00:45.811941  256536 main.go:141] libmachine: (ha-347193-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:00:45.811975  256536 main.go:141] libmachine: (ha-347193-m03) DBG | About to run SSH command:
	I0920 18:00:45.811991  256536 main.go:141] libmachine: (ha-347193-m03) DBG | exit 0
	I0920 18:00:45.942448  256536 main.go:141] libmachine: (ha-347193-m03) DBG | SSH cmd err, output: <nil>: 
	I0920 18:00:45.942757  256536 main.go:141] libmachine: (ha-347193-m03) KVM machine creation complete!
	I0920 18:00:45.943036  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetConfigRaw
	I0920 18:00:45.943611  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:45.943802  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:45.943956  256536 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:00:45.943968  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetState
	I0920 18:00:45.945108  256536 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:00:45.945127  256536 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:00:45.945134  256536 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:00:45.945143  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:45.947795  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:45.948180  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:45.948212  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:45.948362  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:45.948540  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:45.948731  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:45.948909  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:45.949088  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 18:00:45.949376  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0920 18:00:45.949397  256536 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:00:46.053564  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:00:46.053620  256536 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:00:46.053632  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:46.056590  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.057022  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:46.057055  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.057256  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:46.057474  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.057655  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.057801  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:46.058159  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 18:00:46.058349  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0920 18:00:46.058359  256536 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:00:46.162650  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:00:46.162739  256536 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:00:46.162750  256536 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:00:46.162759  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetMachineName
	I0920 18:00:46.163059  256536 buildroot.go:166] provisioning hostname "ha-347193-m03"
	I0920 18:00:46.163088  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetMachineName
	I0920 18:00:46.163316  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:46.166267  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.166667  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:46.166690  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.166891  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:46.167092  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.167331  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.167501  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:46.167710  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 18:00:46.167873  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0920 18:00:46.167885  256536 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-347193-m03 && echo "ha-347193-m03" | sudo tee /etc/hostname
	I0920 18:00:46.284161  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-347193-m03
	
	I0920 18:00:46.284194  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:46.287604  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.288162  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:46.288212  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.288377  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:46.288598  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.288781  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.288997  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:46.289164  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 18:00:46.289333  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0920 18:00:46.289348  256536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-347193-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-347193-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-347193-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:00:46.403249  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:00:46.403284  256536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 18:00:46.403312  256536 buildroot.go:174] setting up certificates
	I0920 18:00:46.403323  256536 provision.go:84] configureAuth start
	I0920 18:00:46.403334  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetMachineName
	I0920 18:00:46.403661  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetIP
	I0920 18:00:46.407072  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.407456  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:46.407507  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.407605  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:46.410105  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.410437  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:46.410474  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.410693  256536 provision.go:143] copyHostCerts
	I0920 18:00:46.410731  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 18:00:46.410776  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 18:00:46.410788  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 18:00:46.410872  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 18:00:46.410969  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 18:00:46.410999  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 18:00:46.411009  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 18:00:46.411048  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 18:00:46.411112  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 18:00:46.411134  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 18:00:46.411141  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 18:00:46.411174  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 18:00:46.411245  256536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.ha-347193-m03 san=[127.0.0.1 192.168.39.250 ha-347193-m03 localhost minikube]
	I0920 18:00:46.589496  256536 provision.go:177] copyRemoteCerts
	I0920 18:00:46.589576  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:00:46.589611  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:46.592753  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.593174  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:46.593204  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.593452  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:46.593684  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.593864  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:46.594009  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa Username:docker}
	I0920 18:00:46.676664  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 18:00:46.676774  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:00:46.702866  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 18:00:46.702960  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 18:00:46.728033  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 18:00:46.728125  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:00:46.752902  256536 provision.go:87] duration metric: took 349.552078ms to configureAuth
	I0920 18:00:46.752934  256536 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:00:46.753136  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:00:46.753210  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:46.755906  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.756375  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:46.756398  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.756668  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:46.756899  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.757160  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.757332  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:46.757510  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 18:00:46.757706  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0920 18:00:46.757726  256536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:00:46.996420  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:00:46.996456  256536 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:00:46.996468  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetURL
	I0920 18:00:46.998173  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Using libvirt version 6000000
	I0920 18:00:47.000536  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.000948  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:47.001005  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.001175  256536 main.go:141] libmachine: Docker is up and running!
	I0920 18:00:47.001193  256536 main.go:141] libmachine: Reticulating splines...
	I0920 18:00:47.001204  256536 client.go:171] duration metric: took 24.931317889s to LocalClient.Create
	I0920 18:00:47.001232  256536 start.go:167] duration metric: took 24.931386973s to libmachine.API.Create "ha-347193"
	I0920 18:00:47.001245  256536 start.go:293] postStartSetup for "ha-347193-m03" (driver="kvm2")
	I0920 18:00:47.001262  256536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:00:47.001288  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:47.001582  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:00:47.001615  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:47.005636  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.006217  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:47.006249  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.006471  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:47.006730  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:47.006897  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:47.007131  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa Username:docker}
	I0920 18:00:47.088575  256536 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:00:47.093116  256536 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:00:47.093144  256536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 18:00:47.093215  256536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 18:00:47.093286  256536 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 18:00:47.093296  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /etc/ssl/certs/2448492.pem
	I0920 18:00:47.093380  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:00:47.103343  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:00:47.129139  256536 start.go:296] duration metric: took 127.87289ms for postStartSetup
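The postStartSetup above boils down to creating minikube's working directories on the guest and mirroring any local file assets from the host's .minikube/files tree. A rough shell equivalent (minikube actually drives this over its own SSH runner, so the direct commands are illustrative):

    sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube \
      /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries \
      /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
    # local asset .minikube/files/etc/ssl/certs/2448492.pem is copied to /etc/ssl/certs/2448492.pem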
	I0920 18:00:47.129196  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetConfigRaw
	I0920 18:00:47.129896  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetIP
	I0920 18:00:47.132942  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.133411  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:47.133437  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.133773  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 18:00:47.134091  256536 start.go:128] duration metric: took 25.084442035s to createHost
	I0920 18:00:47.134127  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:47.136774  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.137134  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:47.137159  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.137348  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:47.137616  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:47.137786  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:47.137992  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:47.138197  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 18:00:47.138375  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0920 18:00:47.138386  256536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:00:47.242925  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855247.221790500
	
	I0920 18:00:47.242952  256536 fix.go:216] guest clock: 1726855247.221790500
	I0920 18:00:47.242962  256536 fix.go:229] Guest: 2024-09-20 18:00:47.2217905 +0000 UTC Remote: 2024-09-20 18:00:47.134109422 +0000 UTC m=+147.450601767 (delta=87.681078ms)
	I0920 18:00:47.242983  256536 fix.go:200] guest clock delta is within tolerance: 87.681078ms
	I0920 18:00:47.242988  256536 start.go:83] releasing machines lock for "ha-347193-m03", held for 25.193498164s
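The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and skip any correction because the ~88 ms delta is within tolerance. A hedged sketch of the same comparison (the ssh invocation and bc arithmetic are illustrative, and the key path is shortened):

    guest=$(ssh -i ~/.minikube/machines/ha-347193-m03/id_rsa docker@192.168.39.250 'date +%s.%N')
    host=$(date +%s.%N)
    echo "guest clock delta: $(echo "$host - $guest" | bc)s"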
	I0920 18:00:47.243006  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:47.243300  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetIP
	I0920 18:00:47.246354  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.246809  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:47.246844  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.249405  256536 out.go:177] * Found network options:
	I0920 18:00:47.251083  256536 out.go:177]   - NO_PROXY=192.168.39.246,192.168.39.241
	W0920 18:00:47.252536  256536 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 18:00:47.252563  256536 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 18:00:47.252582  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:47.253272  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:47.253546  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:47.253662  256536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:00:47.253727  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	W0920 18:00:47.253771  256536 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 18:00:47.253799  256536 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 18:00:47.253880  256536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:00:47.253928  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:47.256829  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.256923  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.257208  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:47.257233  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.257309  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:47.257347  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.257407  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:47.257616  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:47.257619  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:47.257870  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:47.257875  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:47.258038  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa Username:docker}
	I0920 18:00:47.258107  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:47.258329  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa Username:docker}
	I0920 18:00:47.495115  256536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:00:47.501076  256536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:00:47.501151  256536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:00:47.517330  256536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:00:47.517360  256536 start.go:495] detecting cgroup driver to use...
	I0920 18:00:47.517421  256536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:00:47.534608  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:00:47.549798  256536 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:00:47.549868  256536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:00:47.564991  256536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:00:47.580654  256536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:00:47.705785  256536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:00:47.870467  256536 docker.go:233] disabling docker service ...
	I0920 18:00:47.870543  256536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:00:47.889659  256536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:00:47.904008  256536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:00:48.037069  256536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:00:48.172437  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:00:48.186077  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:00:48.205661  256536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:00:48.205724  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:00:48.216421  256536 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:00:48.216509  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:00:48.228291  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:00:48.239306  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:00:48.249763  256536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:00:48.260784  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:00:48.271597  256536 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:00:48.290072  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
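The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10, cgroupfs becomes the cgroup manager, conmon runs in the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A hedged check, expected to show roughly the following (surrounding keys on the image may differ):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",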
	I0920 18:00:48.301232  256536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:00:48.311548  256536 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:00:48.311624  256536 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:00:48.327406  256536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:00:48.338454  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:00:48.463827  256536 ssh_runner.go:195] Run: sudo systemctl restart crio
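Because the guest image does not yet expose the bridge netfilter sysctl, the sequence above loads br_netfilter, enables IPv4 forwarding, and restarts crio. A hedged verification on the guest:

    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    sudo systemctl is-active crio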
	I0920 18:00:48.563927  256536 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:00:48.564016  256536 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:00:48.569050  256536 start.go:563] Will wait 60s for crictl version
	I0920 18:00:48.569137  256536 ssh_runner.go:195] Run: which crictl
	I0920 18:00:48.573089  256536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:00:48.612882  256536 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:00:48.612989  256536 ssh_runner.go:195] Run: crio --version
	I0920 18:00:48.641884  256536 ssh_runner.go:195] Run: crio --version
	I0920 18:00:48.674772  256536 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:00:48.676208  256536 out.go:177]   - env NO_PROXY=192.168.39.246
	I0920 18:00:48.677575  256536 out.go:177]   - env NO_PROXY=192.168.39.246,192.168.39.241
	I0920 18:00:48.679175  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetIP
	I0920 18:00:48.682184  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:48.682668  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:48.682700  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:48.682899  256536 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:00:48.687203  256536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
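The one-liner above is minikube's /etc/hosts idiom: strip any stale host.minikube.internal entry, append a fresh one pointing at the gateway, then copy the temp file back into place (the same pattern reappears later for control-plane.minikube.internal). Unfolded:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.39.1\thost.minikube.internal'; } > /tmp/h.$$ \
      && sudo cp /tmp/h.$$ /etc/hosts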
	I0920 18:00:48.700132  256536 mustload.go:65] Loading cluster: ha-347193
	I0920 18:00:48.700432  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:00:48.700738  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:00:48.700780  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:00:48.718208  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I0920 18:00:48.718740  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:00:48.719373  256536 main.go:141] libmachine: Using API Version  1
	I0920 18:00:48.719397  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:00:48.719797  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:00:48.720025  256536 main.go:141] libmachine: (ha-347193) Calling .GetState
	I0920 18:00:48.722026  256536 host.go:66] Checking if "ha-347193" exists ...
	I0920 18:00:48.722319  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:00:48.722366  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:00:48.738476  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42131
	I0920 18:00:48.739047  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:00:48.739709  256536 main.go:141] libmachine: Using API Version  1
	I0920 18:00:48.739737  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:00:48.740150  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:00:48.740408  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:00:48.740641  256536 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193 for IP: 192.168.39.250
	I0920 18:00:48.740657  256536 certs.go:194] generating shared ca certs ...
	I0920 18:00:48.740678  256536 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:00:48.740861  256536 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 18:00:48.740924  256536 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 18:00:48.740938  256536 certs.go:256] generating profile certs ...
	I0920 18:00:48.741049  256536 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key
	I0920 18:00:48.741086  256536 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.071b5fb5
	I0920 18:00:48.741106  256536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.071b5fb5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.241 192.168.39.250 192.168.39.254]
	I0920 18:00:48.849787  256536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.071b5fb5 ...
	I0920 18:00:48.849825  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.071b5fb5: {Name:mk94b8924122fda4caf4db9161420b6f420a2437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:00:48.850030  256536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.071b5fb5 ...
	I0920 18:00:48.850042  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.071b5fb5: {Name:mk6d1c5532994e70c91ba359922d7d11837270cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:00:48.850120  256536 certs.go:381] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.071b5fb5 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt
	I0920 18:00:48.850256  256536 certs.go:385] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.071b5fb5 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key
	I0920 18:00:48.850383  256536 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key
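The profile's apiserver cert is regenerated above so its SANs cover the service IP, localhost, all three control-plane node IPs, and the kube-vip address 192.168.39.254. A hedged way to confirm the SAN list on the host:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt \
      | grep -A1 'Subject Alternative Name'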
	I0920 18:00:48.850401  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 18:00:48.850413  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 18:00:48.850425  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 18:00:48.850434  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 18:00:48.850447  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 18:00:48.850458  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 18:00:48.850472  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 18:00:48.866055  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 18:00:48.866157  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 18:00:48.866197  256536 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 18:00:48.866207  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:00:48.866228  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:00:48.866250  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:00:48.866268  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 18:00:48.866305  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:00:48.866332  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:00:48.866346  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem -> /usr/share/ca-certificates/244849.pem
	I0920 18:00:48.866361  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /usr/share/ca-certificates/2448492.pem
	I0920 18:00:48.866398  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:00:48.869320  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:00:48.869797  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:00:48.869831  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:00:48.870003  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:00:48.870250  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:00:48.870392  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:00:48.870532  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 18:00:48.946355  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 18:00:48.951957  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 18:00:48.963708  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 18:00:48.968268  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0920 18:00:48.979656  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 18:00:48.983832  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 18:00:48.995975  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 18:00:48.999924  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0920 18:00:49.010455  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 18:00:49.014784  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 18:00:49.025741  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 18:00:49.030881  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0920 18:00:49.042858  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:00:49.071216  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:00:49.096135  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:00:49.120994  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:00:49.146256  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0920 18:00:49.170936  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:00:49.195738  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:00:49.219660  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:00:49.243873  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:00:49.268501  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 18:00:49.293119  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 18:00:49.317663  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 18:00:49.336046  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0920 18:00:49.352794  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 18:00:49.370728  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0920 18:00:49.388727  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 18:00:49.406268  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0920 18:00:49.422685  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 18:00:49.439002  256536 ssh_runner.go:195] Run: openssl version
	I0920 18:00:49.444882  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:00:49.456482  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:00:49.461403  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:00:49.461480  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:00:49.470070  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:00:49.481997  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 18:00:49.496420  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 18:00:49.501453  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 18:00:49.501530  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 18:00:49.508441  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 18:00:49.521740  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 18:00:49.535641  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 18:00:49.541368  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 18:00:49.541431  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 18:00:49.547775  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
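The test/ln commands above implement the standard OpenSSL trust-directory layout: each CA PEM gets a <subject-hash>.0 symlink so TLS clients can look it up; b5213941, 51391683 and 3ec20f2e in the log are exactly those hashes. The idiom in one place:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here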
	I0920 18:00:49.559535  256536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:00:49.563545  256536 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:00:49.563612  256536 kubeadm.go:934] updating node {m03 192.168.39.250 8443 v1.31.1 crio true true} ...
	I0920 18:00:49.563727  256536 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-347193-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
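The unit fragment above becomes the kubelet's systemd drop-in on the new node, pinning the node name and node IP. A hedged check once it is installed:

    sudo systemctl cat kubelet | grep -E 'hostname-override|node-ip'
    # expected: --hostname-override=ha-347193-m03 --node-ip=192.168.39.250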
	I0920 18:00:49.563772  256536 kube-vip.go:115] generating kube-vip config ...
	I0920 18:00:49.563822  256536 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 18:00:49.580897  256536 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 18:00:49.580978  256536 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
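The generated manifest above runs kube-vip as a static pod on each control-plane node; it holds the 192.168.39.254 VIP via ARP leader election and load-balances port 8443 across the apiservers. Hedged checks that the VIP is live (any HTTP answer from the curl means the address routes to an apiserver):

    kubectl --context ha-347193 -n kube-system get pods -o wide | grep kube-vip
    ping -c1 192.168.39.254
    curl -sk https://192.168.39.254:8443/version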
	I0920 18:00:49.581038  256536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:00:49.590566  256536 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 18:00:49.590695  256536 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 18:00:49.600047  256536 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0920 18:00:49.600048  256536 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0920 18:00:49.600092  256536 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 18:00:49.600085  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 18:00:49.600108  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 18:00:49.600145  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:00:49.600623  256536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 18:00:49.600694  256536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 18:00:49.606126  256536 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 18:00:49.606169  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 18:00:49.632538  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 18:00:49.632673  256536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 18:00:49.632669  256536 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 18:00:49.632772  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 18:00:49.675110  256536 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 18:00:49.675165  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
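Because /var/lib/minikube/binaries/v1.31.1 does not exist on the guest yet, kubeadm, kubectl and kubelet are fetched from dl.k8s.io (checksums come from the matching .sha256 files) and copied into place. A hedged sketch of the same download-and-verify step for one binary:

    v=v1.31.1; b=kubelet
    curl -fLO "https://dl.k8s.io/release/${v}/bin/linux/amd64/${b}"
    echo "$(curl -fsL https://dl.k8s.io/release/${v}/bin/linux/amd64/${b}.sha256)  ${b}" | sha256sum --check
    sudo install -m 0755 "${b}" "/var/lib/minikube/binaries/${v}/${b}"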
	I0920 18:00:50.517293  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 18:00:50.527931  256536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 18:00:50.545163  256536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:00:50.562804  256536 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 18:00:50.579873  256536 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 18:00:50.583899  256536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:00:50.595871  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:00:50.727492  256536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:00:50.746998  256536 host.go:66] Checking if "ha-347193" exists ...
	I0920 18:00:50.747552  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:00:50.747621  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:00:50.764998  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35973
	I0920 18:00:50.765568  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:00:50.766259  256536 main.go:141] libmachine: Using API Version  1
	I0920 18:00:50.766285  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:00:50.766697  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:00:50.766924  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:00:50.767151  256536 start.go:317] joinCluster: &{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:00:50.767302  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 18:00:50.767319  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:00:50.770123  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:00:50.770554  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:00:50.770590  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:00:50.770696  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:00:50.770948  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:00:50.771120  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:00:50.771276  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 18:00:50.937328  256536 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:00:50.937401  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token w3dh0u.en8aqh39le5u0uln --discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-347193-m03 --control-plane --apiserver-advertise-address=192.168.39.250 --apiserver-bind-port=8443"
	I0920 18:01:13.927196  256536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token w3dh0u.en8aqh39le5u0uln --discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-347193-m03 --control-plane --apiserver-advertise-address=192.168.39.250 --apiserver-bind-port=8443": (22.989760407s)
	I0920 18:01:13.927243  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 18:01:14.543516  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-347193-m03 minikube.k8s.io/updated_at=2024_09_20T18_01_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=ha-347193 minikube.k8s.io/primary=false
	I0920 18:01:14.679099  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-347193-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 18:01:14.820428  256536 start.go:319] duration metric: took 24.053268109s to joinCluster
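The join above is a standard control-plane kubeadm join against the VIP-backed endpoint, with the CRI socket, node name and advertise address pinned. Its shape, with the short-lived token and CA hash from the log omitted:

    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join \
      control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address=192.168.39.250 --apiserver-bind-port=8443 \
      --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-347193-m03 \
      --ignore-preflight-errors=all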
	I0920 18:01:14.820517  256536 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:01:14.820875  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:01:14.822533  256536 out.go:177] * Verifying Kubernetes components...
	I0920 18:01:14.823874  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:01:15.125787  256536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:01:15.183134  256536 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 18:01:15.183424  256536 kapi.go:59] client config for ha-347193: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.crt", KeyFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key", CAFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 18:01:15.183503  256536 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.246:8443
	I0920 18:01:15.183888  256536 node_ready.go:35] waiting up to 6m0s for node "ha-347193-m03" to be "Ready" ...
	I0920 18:01:15.184021  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:15.184034  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:15.184045  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:15.184057  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:15.188812  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
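From here the test keeps issuing GET /api/v1/nodes/ha-347193-m03 roughly every 500 ms until the node's Ready condition flips to True. A one-shot hedged equivalent from the host would be:

    kubectl --context ha-347193 wait node/ha-347193-m03 --for=condition=Ready --timeout=6m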
	I0920 18:01:15.684732  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:15.684762  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:15.684773  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:15.684779  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:15.688455  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:16.184249  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:16.184278  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:16.184290  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:16.184296  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:16.188149  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:16.684238  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:16.684266  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:16.684276  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:16.684280  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:16.688135  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:17.184574  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:17.184605  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:17.184616  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:17.184622  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:17.188720  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:17.189742  256536 node_ready.go:53] node "ha-347193-m03" has status "Ready":"False"
	I0920 18:01:17.684157  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:17.684188  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:17.684200  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:17.684205  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:17.687993  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:18.184987  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:18.185016  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:18.185027  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:18.185033  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:18.188436  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:18.684240  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:18.684263  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:18.684270  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:18.684274  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:18.688063  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:19.184814  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:19.184846  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:19.184859  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:19.184868  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:19.189842  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:19.190448  256536 node_ready.go:53] node "ha-347193-m03" has status "Ready":"False"
	I0920 18:01:19.684861  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:19.684890  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:19.684901  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:19.684908  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:19.688056  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:20.184157  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:20.184183  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:20.184192  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:20.184196  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:20.190785  256536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 18:01:20.684195  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:20.684230  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:20.684241  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:20.684245  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:20.688027  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:21.185183  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:21.185207  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:21.185216  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:21.185221  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:21.188774  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:21.684314  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:21.684338  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:21.684350  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:21.684355  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:21.687635  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:21.688202  256536 node_ready.go:53] node "ha-347193-m03" has status "Ready":"False"
	I0920 18:01:22.185048  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:22.185073  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:22.185084  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:22.185089  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:22.188754  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:22.684520  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:22.684570  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:22.684579  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:22.684584  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:22.688376  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:23.184575  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:23.184600  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:23.184608  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:23.184612  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:23.189052  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:23.684932  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:23.684955  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:23.684965  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:23.684968  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:23.688597  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:23.689108  256536 node_ready.go:53] node "ha-347193-m03" has status "Ready":"False"
	I0920 18:01:24.184308  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:24.184334  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:24.184344  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:24.184350  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:24.188092  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:24.684218  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:24.684252  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:24.684261  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:24.684264  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:24.688018  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:25.184193  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:25.184221  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:25.184232  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:25.184237  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:25.188243  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:25.684786  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:25.684818  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:25.684830  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:25.684837  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:25.687395  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:01:26.184220  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:26.184255  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:26.184270  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:26.184273  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:26.188544  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:26.189181  256536 node_ready.go:53] node "ha-347193-m03" has status "Ready":"False"
	I0920 18:01:26.684404  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:26.684432  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:26.684445  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:26.684452  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:26.688821  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:27.184155  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:27.184182  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:27.184191  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:27.184194  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:27.187676  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:27.684611  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:27.684643  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:27.684651  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:27.684654  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:27.688751  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:28.184312  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:28.184339  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:28.184347  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:28.184350  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:28.188272  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:28.684161  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:28.684200  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:28.684208  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:28.684212  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:28.687898  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:28.688502  256536 node_ready.go:53] node "ha-347193-m03" has status "Ready":"False"
	I0920 18:01:29.184527  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:29.184554  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:29.184563  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:29.184570  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:29.188227  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:29.685118  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:29.685147  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:29.685157  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:29.685159  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:29.689095  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:30.184672  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:30.184697  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:30.184705  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:30.184709  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:30.188058  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:30.685162  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:30.685189  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:30.685200  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:30.685206  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:30.688686  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:30.689119  256536 node_ready.go:53] node "ha-347193-m03" has status "Ready":"False"
	I0920 18:01:31.184362  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:31.184388  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:31.184397  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:31.184401  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:31.188508  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:31.684348  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:31.684374  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:31.684382  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:31.684388  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:31.688113  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:32.184592  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:32.184620  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.184629  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.184633  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.188695  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:32.684894  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:32.684920  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.684929  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.684933  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.688521  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:32.689073  256536 node_ready.go:49] node "ha-347193-m03" has status "Ready":"True"
	I0920 18:01:32.689098  256536 node_ready.go:38] duration metric: took 17.505173835s for node "ha-347193-m03" to be "Ready" ...
	I0920 18:01:32.689108  256536 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:01:32.689179  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:01:32.689189  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.689196  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.689200  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.713301  256536 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0920 18:01:32.721489  256536 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6llmd" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.721627  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6llmd
	I0920 18:01:32.721638  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.721649  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.721660  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.731687  256536 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0920 18:01:32.732373  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:32.732393  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.732404  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.732410  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.740976  256536 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 18:01:32.741470  256536 pod_ready.go:93] pod "coredns-7c65d6cfc9-6llmd" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:32.741487  256536 pod_ready.go:82] duration metric: took 19.962818ms for pod "coredns-7c65d6cfc9-6llmd" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.741496  256536 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bkmhn" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.741558  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bkmhn
	I0920 18:01:32.741564  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.741572  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.741578  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.754720  256536 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0920 18:01:32.755448  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:32.755463  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.755471  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.755475  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.764627  256536 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0920 18:01:32.765312  256536 pod_ready.go:93] pod "coredns-7c65d6cfc9-bkmhn" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:32.765342  256536 pod_ready.go:82] duration metric: took 23.838489ms for pod "coredns-7c65d6cfc9-bkmhn" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.765357  256536 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.765462  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-347193
	I0920 18:01:32.765474  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.765484  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.765492  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.774103  256536 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 18:01:32.774830  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:32.774850  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.774858  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.774861  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.777561  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:01:32.778082  256536 pod_ready.go:93] pod "etcd-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:32.778110  256536 pod_ready.go:82] duration metric: took 12.744363ms for pod "etcd-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.778122  256536 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.778202  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-347193-m02
	I0920 18:01:32.778213  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.778225  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.778234  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.781035  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:01:32.781896  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:32.781933  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.781945  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.781950  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.784612  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:01:32.785026  256536 pod_ready.go:93] pod "etcd-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:32.785044  256536 pod_ready.go:82] duration metric: took 6.912479ms for pod "etcd-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.785057  256536 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.885398  256536 request.go:632] Waited for 100.268978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-347193-m03
	I0920 18:01:32.885496  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-347193-m03
	I0920 18:01:32.885505  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.885513  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.885520  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.889795  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:33.084880  256536 request.go:632] Waited for 194.30681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:33.084946  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:33.084952  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:33.084960  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:33.084964  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:33.088321  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:33.088961  256536 pod_ready.go:93] pod "etcd-ha-347193-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:33.088982  256536 pod_ready.go:82] duration metric: took 303.916513ms for pod "etcd-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:33.089001  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:33.285463  256536 request.go:632] Waited for 196.366216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193
	I0920 18:01:33.285538  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193
	I0920 18:01:33.285544  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:33.285553  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:33.285557  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:33.289153  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:33.485283  256536 request.go:632] Waited for 195.396109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:33.485343  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:33.485349  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:33.485363  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:33.485368  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:33.488640  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:33.489171  256536 pod_ready.go:93] pod "kube-apiserver-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:33.489194  256536 pod_ready.go:82] duration metric: took 400.186326ms for pod "kube-apiserver-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:33.489203  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:33.685381  256536 request.go:632] Waited for 196.09905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193-m02
	I0920 18:01:33.685495  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193-m02
	I0920 18:01:33.685509  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:33.685526  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:33.685534  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:33.689644  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:33.885477  256536 request.go:632] Waited for 194.996096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:33.885557  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:33.885565  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:33.885575  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:33.885584  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:33.888804  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:33.889531  256536 pod_ready.go:93] pod "kube-apiserver-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:33.889552  256536 pod_ready.go:82] duration metric: took 400.342117ms for pod "kube-apiserver-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:33.889562  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:34.085670  256536 request.go:632] Waited for 196.018178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193-m03
	I0920 18:01:34.085746  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193-m03
	I0920 18:01:34.085754  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:34.085766  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:34.085774  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:34.089521  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:34.285667  256536 request.go:632] Waited for 195.397565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:34.285731  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:34.285736  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:34.285744  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:34.285747  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:34.289576  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:34.290194  256536 pod_ready.go:93] pod "kube-apiserver-ha-347193-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:34.290225  256536 pod_ready.go:82] duration metric: took 400.654429ms for pod "kube-apiserver-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:34.290241  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:34.485359  256536 request.go:632] Waited for 195.022891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193
	I0920 18:01:34.485429  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193
	I0920 18:01:34.485446  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:34.485459  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:34.485466  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:34.489143  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:34.685371  256536 request.go:632] Waited for 195.396623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:34.685455  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:34.685461  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:34.685471  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:34.685477  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:34.688902  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:34.689635  256536 pod_ready.go:93] pod "kube-controller-manager-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:34.689658  256536 pod_ready.go:82] duration metric: took 399.407979ms for pod "kube-controller-manager-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:34.689671  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:34.885295  256536 request.go:632] Waited for 195.53866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193-m02
	I0920 18:01:34.885360  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193-m02
	I0920 18:01:34.885365  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:34.885373  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:34.885377  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:34.888992  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:35.085267  256536 request.go:632] Waited for 195.362009ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:35.085328  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:35.085334  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:35.085345  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:35.085356  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:35.088980  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:35.090052  256536 pod_ready.go:93] pod "kube-controller-manager-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:35.090080  256536 pod_ready.go:82] duration metric: took 400.399772ms for pod "kube-controller-manager-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:35.090093  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:35.285052  256536 request.go:632] Waited for 194.845569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193-m03
	I0920 18:01:35.285131  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193-m03
	I0920 18:01:35.285140  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:35.285150  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:35.285160  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:35.288701  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:35.484934  256536 request.go:632] Waited for 195.307179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:35.485011  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:35.485016  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:35.485024  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:35.485033  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:35.488224  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:35.488823  256536 pod_ready.go:93] pod "kube-controller-manager-ha-347193-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:35.488842  256536 pod_ready.go:82] duration metric: took 398.741341ms for pod "kube-controller-manager-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:35.488859  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ffdvq" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:35.684978  256536 request.go:632] Waited for 196.047954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ffdvq
	I0920 18:01:35.685045  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ffdvq
	I0920 18:01:35.685051  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:35.685059  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:35.685063  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:35.689004  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:35.885928  256536 request.go:632] Waited for 196.269085ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:35.886004  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:35.886014  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:35.886025  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:35.886035  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:35.889926  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:35.890483  256536 pod_ready.go:93] pod "kube-proxy-ffdvq" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:35.890511  256536 pod_ready.go:82] duration metric: took 401.643812ms for pod "kube-proxy-ffdvq" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:35.890526  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pccxp" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:36.085261  256536 request.go:632] Waited for 194.62795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pccxp
	I0920 18:01:36.085385  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pccxp
	I0920 18:01:36.085393  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:36.085402  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:36.085408  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:36.089652  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:36.285734  256536 request.go:632] Waited for 195.416978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:36.285799  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:36.285804  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:36.285812  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:36.285816  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:36.289287  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:36.289898  256536 pod_ready.go:93] pod "kube-proxy-pccxp" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:36.289950  256536 pod_ready.go:82] duration metric: took 399.411009ms for pod "kube-proxy-pccxp" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:36.289967  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rdqkg" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:36.484907  256536 request.go:632] Waited for 194.838014ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdqkg
	I0920 18:01:36.485002  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdqkg
	I0920 18:01:36.485015  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:36.485026  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:36.485035  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:36.488569  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:36.685854  256536 request.go:632] Waited for 196.449208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:36.685961  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:36.685971  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:36.685979  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:36.685982  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:36.690267  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:36.691030  256536 pod_ready.go:93] pod "kube-proxy-rdqkg" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:36.691060  256536 pod_ready.go:82] duration metric: took 401.083761ms for pod "kube-proxy-rdqkg" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:36.691073  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:36.884877  256536 request.go:632] Waited for 193.713134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193
	I0920 18:01:36.884990  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193
	I0920 18:01:36.885002  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:36.885014  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:36.885023  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:36.888846  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:37.086004  256536 request.go:632] Waited for 196.564771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:37.086085  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:37.086094  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:37.086106  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:37.086115  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:37.090524  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:37.091265  256536 pod_ready.go:93] pod "kube-scheduler-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:37.091290  256536 pod_ready.go:82] duration metric: took 400.207966ms for pod "kube-scheduler-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:37.091300  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:37.285288  256536 request.go:632] Waited for 193.886376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193-m02
	I0920 18:01:37.285368  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193-m02
	I0920 18:01:37.285376  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:37.285388  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:37.285396  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:37.288742  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:37.485296  256536 request.go:632] Waited for 196.041594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:37.485365  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:37.485370  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:37.485379  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:37.485382  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:37.488438  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:37.488873  256536 pod_ready.go:93] pod "kube-scheduler-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:37.488894  256536 pod_ready.go:82] duration metric: took 397.585949ms for pod "kube-scheduler-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:37.488904  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:37.684947  256536 request.go:632] Waited for 195.929511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193-m03
	I0920 18:01:37.685019  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193-m03
	I0920 18:01:37.685027  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:37.685037  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:37.685042  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:37.688698  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:37.885884  256536 request.go:632] Waited for 196.412935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:37.885988  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:37.885998  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:37.886006  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:37.886010  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:37.889509  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:37.890123  256536 pod_ready.go:93] pod "kube-scheduler-ha-347193-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:37.890146  256536 pod_ready.go:82] duration metric: took 401.23569ms for pod "kube-scheduler-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:37.890158  256536 pod_ready.go:39] duration metric: took 5.201039475s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:01:37.890178  256536 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:01:37.890240  256536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:01:37.905594  256536 api_server.go:72] duration metric: took 23.085026432s to wait for apiserver process to appear ...
	I0920 18:01:37.905621  256536 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:01:37.905659  256536 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I0920 18:01:37.910576  256536 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I0920 18:01:37.910667  256536 round_trippers.go:463] GET https://192.168.39.246:8443/version
	I0920 18:01:37.910679  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:37.910691  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:37.910701  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:37.911708  256536 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0920 18:01:37.911795  256536 api_server.go:141] control plane version: v1.31.1
	I0920 18:01:37.911813  256536 api_server.go:131] duration metric: took 6.185417ms to wait for apiserver health ...
	I0920 18:01:37.911822  256536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:01:38.085341  256536 request.go:632] Waited for 173.386572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:01:38.085419  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:01:38.085431  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:38.085456  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:38.085465  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:38.091784  256536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 18:01:38.097649  256536 system_pods.go:59] 24 kube-system pods found
	I0920 18:01:38.097681  256536 system_pods.go:61] "coredns-7c65d6cfc9-6llmd" [8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92] Running
	I0920 18:01:38.097686  256536 system_pods.go:61] "coredns-7c65d6cfc9-bkmhn" [f7862a6e-54cc-450c-b283-d20fb99f51ce] Running
	I0920 18:01:38.097691  256536 system_pods.go:61] "etcd-ha-347193" [e13fc198-b02b-4f0a-bf76-be0f519d9d57] Running
	I0920 18:01:38.097695  256536 system_pods.go:61] "etcd-ha-347193-m02" [4ea69953-b35a-4ae9-8153-cea3be5e2c1c] Running
	I0920 18:01:38.097698  256536 system_pods.go:61] "etcd-ha-347193-m03" [e83dd2f3-86bc-466d-9913-390f756db956] Running
	I0920 18:01:38.097701  256536 system_pods.go:61] "kindnet-5msnk" [af184b84-65ce-4ba0-879e-87ec81029f7e] Running
	I0920 18:01:38.097705  256536 system_pods.go:61] "kindnet-cqbxl" [3d49a6b1-5be5-4d96-98e3-bd05035a2d1b] Running
	I0920 18:01:38.097708  256536 system_pods.go:61] "kindnet-z24zp" [9271d251-2d95-4b23-85f3-7da6567b2fc3] Running
	I0920 18:01:38.097711  256536 system_pods.go:61] "kube-apiserver-ha-347193" [993ccf05-a39a-42b4-b82d-936531325dc4] Running
	I0920 18:01:38.097714  256536 system_pods.go:61] "kube-apiserver-ha-347193-m02" [43cd77b8-8925-4a04-a8cf-1b9a0cbbc502] Running
	I0920 18:01:38.097718  256536 system_pods.go:61] "kube-apiserver-ha-347193-m03" [02b7bcea-c245-4b1e-9be5-e815d4aceb74] Running
	I0920 18:01:38.097721  256536 system_pods.go:61] "kube-controller-manager-ha-347193" [6de3a14b-6587-45d4-aaee-1256b9c327cc] Running
	I0920 18:01:38.097724  256536 system_pods.go:61] "kube-controller-manager-ha-347193-m02" [cdf3f4d7-0675-4c59-8ad5-8901104d71c3] Running
	I0920 18:01:38.097727  256536 system_pods.go:61] "kube-controller-manager-ha-347193-m03" [3a4a0044-50e7-475a-9be9-76edda1c27ab] Running
	I0920 18:01:38.097729  256536 system_pods.go:61] "kube-proxy-ffdvq" [97120f62-0af2-405a-b8ff-639c72a39a2d] Running
	I0920 18:01:38.097732  256536 system_pods.go:61] "kube-proxy-pccxp" [3a4882b7-f59f-47d4-b2dc-d5b7f8f0d2c7] Running
	I0920 18:01:38.097735  256536 system_pods.go:61] "kube-proxy-rdqkg" [d9ae4e37-b29b-400a-af2d-544da4024069] Running
	I0920 18:01:38.097738  256536 system_pods.go:61] "kube-scheduler-ha-347193" [910baa0e-404e-4ac7-9262-848672eaf9cf] Running
	I0920 18:01:38.097743  256536 system_pods.go:61] "kube-scheduler-ha-347193-m02" [623b9c3b-b998-4516-a53e-17e9d8970594] Running
	I0920 18:01:38.097749  256536 system_pods.go:61] "kube-scheduler-ha-347193-m03" [cd08009b-7b3e-4c73-a2a0-824d43a19c0e] Running
	I0920 18:01:38.097751  256536 system_pods.go:61] "kube-vip-ha-347193" [20d6faa4-600f-4bd0-8acb-1f95c047da58] Running
	I0920 18:01:38.097754  256536 system_pods.go:61] "kube-vip-ha-347193-m02" [1455826c-7b3d-40f7-bb15-a9861ee95e19] Running
	I0920 18:01:38.097757  256536 system_pods.go:61] "kube-vip-ha-347193-m03" [d6b869ce-4510-400c-b8e9-6e3bec9718e4] Running
	I0920 18:01:38.097759  256536 system_pods.go:61] "storage-provisioner" [8924f7ce-85a0-4587-9c05-8a74c7113e9e] Running
	I0920 18:01:38.097766  256536 system_pods.go:74] duration metric: took 185.936377ms to wait for pod list to return data ...
	I0920 18:01:38.097773  256536 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:01:38.285212  256536 request.go:632] Waited for 187.355991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:01:38.285280  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:01:38.285285  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:38.285293  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:38.285298  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:38.290019  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:38.290139  256536 default_sa.go:45] found service account: "default"
	I0920 18:01:38.290156  256536 default_sa.go:55] duration metric: took 192.375892ms for default service account to be created ...
	I0920 18:01:38.290165  256536 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:01:38.485546  256536 request.go:632] Waited for 195.287049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:01:38.485611  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:01:38.485616  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:38.485641  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:38.485645  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:38.491609  256536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:01:38.498558  256536 system_pods.go:86] 24 kube-system pods found
	I0920 18:01:38.498588  256536 system_pods.go:89] "coredns-7c65d6cfc9-6llmd" [8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92] Running
	I0920 18:01:38.498594  256536 system_pods.go:89] "coredns-7c65d6cfc9-bkmhn" [f7862a6e-54cc-450c-b283-d20fb99f51ce] Running
	I0920 18:01:38.498598  256536 system_pods.go:89] "etcd-ha-347193" [e13fc198-b02b-4f0a-bf76-be0f519d9d57] Running
	I0920 18:01:38.498602  256536 system_pods.go:89] "etcd-ha-347193-m02" [4ea69953-b35a-4ae9-8153-cea3be5e2c1c] Running
	I0920 18:01:38.498606  256536 system_pods.go:89] "etcd-ha-347193-m03" [e83dd2f3-86bc-466d-9913-390f756db956] Running
	I0920 18:01:38.498610  256536 system_pods.go:89] "kindnet-5msnk" [af184b84-65ce-4ba0-879e-87ec81029f7e] Running
	I0920 18:01:38.498614  256536 system_pods.go:89] "kindnet-cqbxl" [3d49a6b1-5be5-4d96-98e3-bd05035a2d1b] Running
	I0920 18:01:38.498618  256536 system_pods.go:89] "kindnet-z24zp" [9271d251-2d95-4b23-85f3-7da6567b2fc3] Running
	I0920 18:01:38.498622  256536 system_pods.go:89] "kube-apiserver-ha-347193" [993ccf05-a39a-42b4-b82d-936531325dc4] Running
	I0920 18:01:38.498625  256536 system_pods.go:89] "kube-apiserver-ha-347193-m02" [43cd77b8-8925-4a04-a8cf-1b9a0cbbc502] Running
	I0920 18:01:38.498629  256536 system_pods.go:89] "kube-apiserver-ha-347193-m03" [02b7bcea-c245-4b1e-9be5-e815d4aceb74] Running
	I0920 18:01:38.498634  256536 system_pods.go:89] "kube-controller-manager-ha-347193" [6de3a14b-6587-45d4-aaee-1256b9c327cc] Running
	I0920 18:01:38.498637  256536 system_pods.go:89] "kube-controller-manager-ha-347193-m02" [cdf3f4d7-0675-4c59-8ad5-8901104d71c3] Running
	I0920 18:01:38.498641  256536 system_pods.go:89] "kube-controller-manager-ha-347193-m03" [3a4a0044-50e7-475a-9be9-76edda1c27ab] Running
	I0920 18:01:38.498644  256536 system_pods.go:89] "kube-proxy-ffdvq" [97120f62-0af2-405a-b8ff-639c72a39a2d] Running
	I0920 18:01:38.498647  256536 system_pods.go:89] "kube-proxy-pccxp" [3a4882b7-f59f-47d4-b2dc-d5b7f8f0d2c7] Running
	I0920 18:01:38.498653  256536 system_pods.go:89] "kube-proxy-rdqkg" [d9ae4e37-b29b-400a-af2d-544da4024069] Running
	I0920 18:01:38.498658  256536 system_pods.go:89] "kube-scheduler-ha-347193" [910baa0e-404e-4ac7-9262-848672eaf9cf] Running
	I0920 18:01:38.498662  256536 system_pods.go:89] "kube-scheduler-ha-347193-m02" [623b9c3b-b998-4516-a53e-17e9d8970594] Running
	I0920 18:01:38.498666  256536 system_pods.go:89] "kube-scheduler-ha-347193-m03" [cd08009b-7b3e-4c73-a2a0-824d43a19c0e] Running
	I0920 18:01:38.498669  256536 system_pods.go:89] "kube-vip-ha-347193" [20d6faa4-600f-4bd0-8acb-1f95c047da58] Running
	I0920 18:01:38.498673  256536 system_pods.go:89] "kube-vip-ha-347193-m02" [1455826c-7b3d-40f7-bb15-a9861ee95e19] Running
	I0920 18:01:38.498677  256536 system_pods.go:89] "kube-vip-ha-347193-m03" [d6b869ce-4510-400c-b8e9-6e3bec9718e4] Running
	I0920 18:01:38.498684  256536 system_pods.go:89] "storage-provisioner" [8924f7ce-85a0-4587-9c05-8a74c7113e9e] Running
	I0920 18:01:38.498690  256536 system_pods.go:126] duration metric: took 208.521056ms to wait for k8s-apps to be running ...
	I0920 18:01:38.498697  256536 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:01:38.498743  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:01:38.514029  256536 system_svc.go:56] duration metric: took 15.320471ms WaitForService to wait for kubelet
	I0920 18:01:38.514065  256536 kubeadm.go:582] duration metric: took 23.693509389s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:01:38.514086  256536 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:01:38.685544  256536 request.go:632] Waited for 171.353571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I0920 18:01:38.685619  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I0920 18:01:38.685624  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:38.685632  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:38.685636  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:38.690050  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:38.691008  256536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:01:38.691029  256536 node_conditions.go:123] node cpu capacity is 2
	I0920 18:01:38.691041  256536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:01:38.691045  256536 node_conditions.go:123] node cpu capacity is 2
	I0920 18:01:38.691049  256536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:01:38.691051  256536 node_conditions.go:123] node cpu capacity is 2
	I0920 18:01:38.691055  256536 node_conditions.go:105] duration metric: took 176.963396ms to run NodePressure ...
	I0920 18:01:38.691067  256536 start.go:241] waiting for startup goroutines ...
	I0920 18:01:38.691085  256536 start.go:255] writing updated cluster config ...
	I0920 18:01:38.691394  256536 ssh_runner.go:195] Run: rm -f paused
	I0920 18:01:38.746142  256536 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:01:38.748440  256536 out.go:177] * Done! kubectl is now configured to use "ha-347193" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.632320313Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855525632264189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=52aea46d-b138-41e0-b36a-3e8bcfb9bf54 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.632926406Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b192f234-6e11-44c3-9e15-9351566ecce4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.632979920Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b192f234-6e11-44c3-9e15-9351566ecce4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.633242887Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24d13f339c817f2c3d48c059df5dbe95c744b4d770d8f156847fa2a2faddfb16,PodSandboxId:d56c4fb5022a404a46d5f7d82850d40a0aa7777c181ec28335c613507545a0bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726855304216814938,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f54f7a5f2c32d6bc0ec6ee35174b79a19554688494c5a052f414808aba6d5d3,PodSandboxId:1008d082466619b9dff1a593919ad42edc22d2689cb4c63ade9d89a2aa3d82cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726855158873195435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5,PodSandboxId:cfb097797b5199e13abd148addc36710b46ff21a5a455fd00cb451d38da7e05d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158811750044,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01,PodSandboxId:503157b6402f3fdb0fbd3acb2400d53236322647d77c96f1052ed35d749180f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158740895692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54
cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9,PodSandboxId:d420593f085b4cce05c57a111a173b0556ed0c67debb38fe8785523e9aaf085f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172685514
7923720838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5,PodSandboxId:db50f6f39d94cb0d879c4bc3f389365a1aefdeeb9341d4b7a315ad9363b767e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726855146590131954,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3702c95ae17f31f21a9df60d65fe7c873d5a4a63c4bb0951d83c81da6fdcdcc9,PodSandboxId:79b3c32a6e6c014d62d3cf90229370a249daff625a451df26bc56b63f13b5011,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726855137793174604,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40531f7fb6a94d470f366df1ed8127e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce6ebcdcfa257db8db2294b5d4f9a4b32180c57f4a35e50afb651fea43c30c4,PodSandboxId:88ee68a7e316b7dd733350aa45479a511371c952904195167a88e9851da02e65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855135226742097,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f,PodSandboxId:6700b91af83d5fda728b258af762ef66bdc5850c898a59cb3d1e0f4a4a4f5bf2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855135139365214,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d,PodSandboxId:3b399285f0a3eef630d7233d6655753e34da36bbb3233127467544caa535e7b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855135143406129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db95e41c4eeebdea4c6e2d542a978f368ff56b30edfe1ff54593feed82c7c09,PodSandboxId:a832aed299e3faf778cd7e1ebb68848a5f31d0f1bbd92c129bcc7511f62ef4df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855135082024585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b192f234-6e11-44c3-9e15-9351566ecce4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.670404156Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fb7caf71-c2f8-40d2-81f2-dedceaeb8540 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.670501805Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fb7caf71-c2f8-40d2-81f2-dedceaeb8540 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.672005543Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23c8a83f-8d77-4ac6-88a6-a1d80e11cd1e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.672657185Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855525672626326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23c8a83f-8d77-4ac6-88a6-a1d80e11cd1e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.673228036Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d80f23a9-977a-45af-be76-3b6b4b8e66f1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.673340066Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d80f23a9-977a-45af-be76-3b6b4b8e66f1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.673585046Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24d13f339c817f2c3d48c059df5dbe95c744b4d770d8f156847fa2a2faddfb16,PodSandboxId:d56c4fb5022a404a46d5f7d82850d40a0aa7777c181ec28335c613507545a0bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726855304216814938,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f54f7a5f2c32d6bc0ec6ee35174b79a19554688494c5a052f414808aba6d5d3,PodSandboxId:1008d082466619b9dff1a593919ad42edc22d2689cb4c63ade9d89a2aa3d82cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726855158873195435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5,PodSandboxId:cfb097797b5199e13abd148addc36710b46ff21a5a455fd00cb451d38da7e05d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158811750044,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01,PodSandboxId:503157b6402f3fdb0fbd3acb2400d53236322647d77c96f1052ed35d749180f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158740895692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54
cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9,PodSandboxId:d420593f085b4cce05c57a111a173b0556ed0c67debb38fe8785523e9aaf085f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172685514
7923720838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5,PodSandboxId:db50f6f39d94cb0d879c4bc3f389365a1aefdeeb9341d4b7a315ad9363b767e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726855146590131954,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3702c95ae17f31f21a9df60d65fe7c873d5a4a63c4bb0951d83c81da6fdcdcc9,PodSandboxId:79b3c32a6e6c014d62d3cf90229370a249daff625a451df26bc56b63f13b5011,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726855137793174604,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40531f7fb6a94d470f366df1ed8127e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce6ebcdcfa257db8db2294b5d4f9a4b32180c57f4a35e50afb651fea43c30c4,PodSandboxId:88ee68a7e316b7dd733350aa45479a511371c952904195167a88e9851da02e65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855135226742097,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f,PodSandboxId:6700b91af83d5fda728b258af762ef66bdc5850c898a59cb3d1e0f4a4a4f5bf2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855135139365214,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d,PodSandboxId:3b399285f0a3eef630d7233d6655753e34da36bbb3233127467544caa535e7b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855135143406129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db95e41c4eeebdea4c6e2d542a978f368ff56b30edfe1ff54593feed82c7c09,PodSandboxId:a832aed299e3faf778cd7e1ebb68848a5f31d0f1bbd92c129bcc7511f62ef4df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855135082024585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d80f23a9-977a-45af-be76-3b6b4b8e66f1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.713328156Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8743b4a3-88ef-4b2c-91ff-c6ed98a85e3b name=/runtime.v1.RuntimeService/Version
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.713411569Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8743b4a3-88ef-4b2c-91ff-c6ed98a85e3b name=/runtime.v1.RuntimeService/Version
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.714644148Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=770f37c9-6c06-4bdf-9d8b-e9ccedc20313 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.715089030Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855525715064928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=770f37c9-6c06-4bdf-9d8b-e9ccedc20313 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.715585922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bfba97d2-db91-4abf-940c-74ff0f33b438 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.715663092Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bfba97d2-db91-4abf-940c-74ff0f33b438 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.715910886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24d13f339c817f2c3d48c059df5dbe95c744b4d770d8f156847fa2a2faddfb16,PodSandboxId:d56c4fb5022a404a46d5f7d82850d40a0aa7777c181ec28335c613507545a0bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726855304216814938,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f54f7a5f2c32d6bc0ec6ee35174b79a19554688494c5a052f414808aba6d5d3,PodSandboxId:1008d082466619b9dff1a593919ad42edc22d2689cb4c63ade9d89a2aa3d82cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726855158873195435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5,PodSandboxId:cfb097797b5199e13abd148addc36710b46ff21a5a455fd00cb451d38da7e05d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158811750044,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01,PodSandboxId:503157b6402f3fdb0fbd3acb2400d53236322647d77c96f1052ed35d749180f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158740895692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54
cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9,PodSandboxId:d420593f085b4cce05c57a111a173b0556ed0c67debb38fe8785523e9aaf085f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172685514
7923720838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5,PodSandboxId:db50f6f39d94cb0d879c4bc3f389365a1aefdeeb9341d4b7a315ad9363b767e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726855146590131954,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3702c95ae17f31f21a9df60d65fe7c873d5a4a63c4bb0951d83c81da6fdcdcc9,PodSandboxId:79b3c32a6e6c014d62d3cf90229370a249daff625a451df26bc56b63f13b5011,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726855137793174604,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40531f7fb6a94d470f366df1ed8127e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce6ebcdcfa257db8db2294b5d4f9a4b32180c57f4a35e50afb651fea43c30c4,PodSandboxId:88ee68a7e316b7dd733350aa45479a511371c952904195167a88e9851da02e65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855135226742097,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f,PodSandboxId:6700b91af83d5fda728b258af762ef66bdc5850c898a59cb3d1e0f4a4a4f5bf2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855135139365214,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d,PodSandboxId:3b399285f0a3eef630d7233d6655753e34da36bbb3233127467544caa535e7b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855135143406129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db95e41c4eeebdea4c6e2d542a978f368ff56b30edfe1ff54593feed82c7c09,PodSandboxId:a832aed299e3faf778cd7e1ebb68848a5f31d0f1bbd92c129bcc7511f62ef4df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855135082024585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bfba97d2-db91-4abf-940c-74ff0f33b438 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.759950181Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=45fd243b-e5d2-4936-8144-7e9f9c18374b name=/runtime.v1.RuntimeService/Version
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.760045020Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45fd243b-e5d2-4936-8144-7e9f9c18374b name=/runtime.v1.RuntimeService/Version
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.761405157Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=425f2f43-d1be-4198-87ac-5e46fc82c3f2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.761846335Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855525761822621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=425f2f43-d1be-4198-87ac-5e46fc82c3f2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.762527072Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=647a7aab-82ab-46db-8f9c-b353b4614bca name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.762601835Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=647a7aab-82ab-46db-8f9c-b353b4614bca name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:25 ha-347193 crio[669]: time="2024-09-20 18:05:25.762838967Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24d13f339c817f2c3d48c059df5dbe95c744b4d770d8f156847fa2a2faddfb16,PodSandboxId:d56c4fb5022a404a46d5f7d82850d40a0aa7777c181ec28335c613507545a0bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726855304216814938,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f54f7a5f2c32d6bc0ec6ee35174b79a19554688494c5a052f414808aba6d5d3,PodSandboxId:1008d082466619b9dff1a593919ad42edc22d2689cb4c63ade9d89a2aa3d82cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726855158873195435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5,PodSandboxId:cfb097797b5199e13abd148addc36710b46ff21a5a455fd00cb451d38da7e05d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158811750044,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01,PodSandboxId:503157b6402f3fdb0fbd3acb2400d53236322647d77c96f1052ed35d749180f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158740895692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54
cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9,PodSandboxId:d420593f085b4cce05c57a111a173b0556ed0c67debb38fe8785523e9aaf085f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172685514
7923720838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5,PodSandboxId:db50f6f39d94cb0d879c4bc3f389365a1aefdeeb9341d4b7a315ad9363b767e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726855146590131954,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3702c95ae17f31f21a9df60d65fe7c873d5a4a63c4bb0951d83c81da6fdcdcc9,PodSandboxId:79b3c32a6e6c014d62d3cf90229370a249daff625a451df26bc56b63f13b5011,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726855137793174604,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40531f7fb6a94d470f366df1ed8127e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce6ebcdcfa257db8db2294b5d4f9a4b32180c57f4a35e50afb651fea43c30c4,PodSandboxId:88ee68a7e316b7dd733350aa45479a511371c952904195167a88e9851da02e65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855135226742097,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f,PodSandboxId:6700b91af83d5fda728b258af762ef66bdc5850c898a59cb3d1e0f4a4a4f5bf2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855135139365214,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d,PodSandboxId:3b399285f0a3eef630d7233d6655753e34da36bbb3233127467544caa535e7b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855135143406129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db95e41c4eeebdea4c6e2d542a978f368ff56b30edfe1ff54593feed82c7c09,PodSandboxId:a832aed299e3faf778cd7e1ebb68848a5f31d0f1bbd92c129bcc7511f62ef4df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855135082024585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=647a7aab-82ab-46db-8f9c-b353b4614bca name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	24d13f339c817       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   d56c4fb5022a4       busybox-7dff88458-vv8nw
	6f54f7a5f2c32       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   1008d08246661       storage-provisioner
	998d6fb086954       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   cfb097797b519       coredns-7c65d6cfc9-6llmd
	4980eee34ad3b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   503157b6402f3       coredns-7c65d6cfc9-bkmhn
	54d750519756c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   d420593f085b4       kube-proxy-rdqkg
	ebfa9fcdc2495       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   db50f6f39d94c       kindnet-z24zp
	3702c95ae17f3       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   79b3c32a6e6c0       kube-vip-ha-347193
	dce6ebcdcfa25       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   88ee68a7e316b       kube-apiserver-ha-347193
	b9e6f76c6e332       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   3b399285f0a3e       etcd-ha-347193
	6cae0975e4bde       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   6700b91af83d5       kube-scheduler-ha-347193
	5db95e41c4eee       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   a832aed299e3f       kube-controller-manager-ha-347193
	
	
	==> coredns [4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01] <==
	[INFO] 10.244.1.2:54565 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.005838401s
	[INFO] 10.244.2.2:51366 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000199485s
	[INFO] 10.244.0.4:36108 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000120747s
	[INFO] 10.244.0.4:52405 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000754716s
	[INFO] 10.244.0.4:39912 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001939354s
	[INFO] 10.244.1.2:35811 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004936568s
	[INFO] 10.244.1.2:36016 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003046132s
	[INFO] 10.244.1.2:34653 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170016s
	[INFO] 10.244.1.2:59470 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145491s
	[INFO] 10.244.2.2:50581 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001424335s
	[INFO] 10.244.2.2:53657 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087743s
	[INFO] 10.244.0.4:45468 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002017081s
	[INFO] 10.244.0.4:50151 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148946s
	[INFO] 10.244.0.4:51594 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101915s
	[INFO] 10.244.0.4:54414 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114937s
	[INFO] 10.244.1.2:38701 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218522s
	[INFO] 10.244.1.2:41853 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000182128s
	[INFO] 10.244.2.2:48909 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000169464s
	[INFO] 10.244.0.4:55409 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111385s
	[INFO] 10.244.1.2:58822 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000137575s
	[INFO] 10.244.2.2:55178 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124535s
	[INFO] 10.244.2.2:44350 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150664s
	[INFO] 10.244.0.4:57962 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114195s
	[INFO] 10.244.0.4:56551 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094805s
	[INFO] 10.244.0.4:45171 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000054433s
	
	
	==> coredns [998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5] <==
	[INFO] 10.244.1.2:55559 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000283442s
	[INFO] 10.244.2.2:33784 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188859s
	[INFO] 10.244.2.2:58215 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00186989s
	[INFO] 10.244.2.2:52774 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099748s
	[INFO] 10.244.2.2:38149 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001158s
	[INFO] 10.244.2.2:42221 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000113646s
	[INFO] 10.244.2.2:49599 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173465s
	[INFO] 10.244.0.4:60750 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180138s
	[INFO] 10.244.0.4:46666 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171665s
	[INFO] 10.244.0.4:52002 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001444571s
	[INFO] 10.244.0.4:45151 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00006024s
	[INFO] 10.244.1.2:34989 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195829s
	[INFO] 10.244.1.2:34116 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087145s
	[INFO] 10.244.2.2:41553 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124108s
	[INFO] 10.244.2.2:35637 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116822s
	[INFO] 10.244.2.2:34355 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111835s
	[INFO] 10.244.0.4:48848 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165085s
	[INFO] 10.244.0.4:49930 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082351s
	[INFO] 10.244.0.4:35945 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077731s
	[INFO] 10.244.1.2:37666 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145796s
	[INFO] 10.244.1.2:50941 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000259758s
	[INFO] 10.244.1.2:52591 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141872s
	[INFO] 10.244.2.2:39683 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141964s
	[INFO] 10.244.2.2:51672 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000176831s
	[INFO] 10.244.0.4:58285 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000193464s
	
	
	==> describe nodes <==
	Name:               ha-347193
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-347193
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=ha-347193
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T17_59_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:59:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-347193
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:05:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:02:04 +0000   Fri, 20 Sep 2024 17:59:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:02:04 +0000   Fri, 20 Sep 2024 17:59:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:02:04 +0000   Fri, 20 Sep 2024 17:59:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:02:04 +0000   Fri, 20 Sep 2024 17:59:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-347193
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 24c3d61093c44fc4b2898b98b4bdbc70
	  System UUID:                24c3d610-93c4-4fc4-b289-8b98b4bdbc70
	  Boot ID:                    5638bfe2-e986-4137-9385-e18b7e4b519b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vv8nw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 coredns-7c65d6cfc9-6llmd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m20s
	  kube-system                 coredns-7c65d6cfc9-bkmhn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m20s
	  kube-system                 etcd-ha-347193                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m25s
	  kube-system                 kindnet-z24zp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m21s
	  kube-system                 kube-apiserver-ha-347193             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-controller-manager-ha-347193    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-proxy-rdqkg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-scheduler-ha-347193             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-vip-ha-347193                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m17s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m32s (x7 over 6m32s)  kubelet          Node ha-347193 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m32s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m32s (x8 over 6m32s)  kubelet          Node ha-347193 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m32s (x8 over 6m32s)  kubelet          Node ha-347193 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m25s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m25s                  kubelet          Node ha-347193 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m25s                  kubelet          Node ha-347193 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m25s                  kubelet          Node ha-347193 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m21s                  node-controller  Node ha-347193 event: Registered Node ha-347193 in Controller
	  Normal  NodeReady                6m8s                   kubelet          Node ha-347193 status is now: NodeReady
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-347193 event: Registered Node ha-347193 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-347193 event: Registered Node ha-347193 in Controller
	
	
	Name:               ha-347193-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-347193-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=ha-347193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_59_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:59:56 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-347193-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:02:50 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 18:01:58 +0000   Fri, 20 Sep 2024 18:03:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 18:01:58 +0000   Fri, 20 Sep 2024 18:03:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 18:01:58 +0000   Fri, 20 Sep 2024 18:03:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 18:01:58 +0000   Fri, 20 Sep 2024 18:03:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    ha-347193-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 325a97217aeb4c8f9cb24edad597fd25
	  System UUID:                325a9721-7aeb-4c8f-9cb2-4edad597fd25
	  Boot ID:                    bc33abb6-f61b-42e2-af43-631d2ede4061
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-85fk6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 etcd-ha-347193-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m28s
	  kube-system                 kindnet-cqbxl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m30s
	  kube-system                 kube-apiserver-ha-347193-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-controller-manager-ha-347193-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-proxy-ffdvq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-scheduler-ha-347193-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-vip-ha-347193-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m25s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m30s (x8 over 5m30s)  kubelet          Node ha-347193-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m30s (x8 over 5m30s)  kubelet          Node ha-347193-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m30s (x7 over 5m30s)  kubelet          Node ha-347193-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m26s                  node-controller  Node ha-347193-m02 event: Registered Node ha-347193-m02 in Controller
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-347193-m02 event: Registered Node ha-347193-m02 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-347193-m02 event: Registered Node ha-347193-m02 in Controller
	  Normal  NodeNotReady             116s                   node-controller  Node ha-347193-m02 status is now: NodeNotReady
	
	
	Name:               ha-347193-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-347193-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=ha-347193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_01_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:01:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-347193-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:05:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:02:13 +0000   Fri, 20 Sep 2024 18:01:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:02:13 +0000   Fri, 20 Sep 2024 18:01:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:02:13 +0000   Fri, 20 Sep 2024 18:01:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:02:13 +0000   Fri, 20 Sep 2024 18:01:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-347193-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 987815694814485e84522bfda359ab42
	  System UUID:                98781569-4814-485e-8452-2bfda359ab42
	  Boot ID:                    fc58e56d-3ed2-412a-b9e5-cb7d5fb81d74
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-p824h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 etcd-ha-347193-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m13s
	  kube-system                 kindnet-5msnk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m15s
	  kube-system                 kube-apiserver-ha-347193-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-controller-manager-ha-347193-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-proxy-pccxp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-scheduler-ha-347193-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-vip-ha-347193-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m15s (x8 over 4m15s)  kubelet          Node ha-347193-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m15s (x8 over 4m15s)  kubelet          Node ha-347193-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m15s (x7 over 4m15s)  kubelet          Node ha-347193-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-347193-m03 event: Registered Node ha-347193-m03 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-347193-m03 event: Registered Node ha-347193-m03 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-347193-m03 event: Registered Node ha-347193-m03 in Controller
	
	
	Name:               ha-347193-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-347193-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=ha-347193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_02_19_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:02:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-347193-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:05:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:02:49 +0000   Fri, 20 Sep 2024 18:02:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:02:49 +0000   Fri, 20 Sep 2024 18:02:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:02:49 +0000   Fri, 20 Sep 2024 18:02:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:02:49 +0000   Fri, 20 Sep 2024 18:02:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    ha-347193-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 36beb0176a7e4c449ee02f4adaf970e8
	  System UUID:                36beb017-6a7e-4c44-9ee0-2f4adaf970e8
	  Boot ID:                    347456dd-4ba6-4d92-bdee-958017f6c085
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-t5f94       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m8s
	  kube-system                 kube-proxy-gtwzd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m2s                 kube-proxy       
	  Normal  CIDRAssignmentFailed     3m8s                 cidrAllocator    Node ha-347193-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m8s (x2 over 3m8s)  kubelet          Node ha-347193-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m8s (x2 over 3m8s)  kubelet          Node ha-347193-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m8s (x2 over 3m8s)  kubelet          Node ha-347193-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m7s                 node-controller  Node ha-347193-m04 event: Registered Node ha-347193-m04 in Controller
	  Normal  RegisteredNode           3m7s                 node-controller  Node ha-347193-m04 event: Registered Node ha-347193-m04 in Controller
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-347193-m04 event: Registered Node ha-347193-m04 in Controller
	  Normal  NodeReady                2m48s                kubelet          Node ha-347193-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep20 17:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051116] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037930] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.768779] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.874615] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.547112] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.314105] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.055929] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059483] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.173430] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.132192] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.252987] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +3.876503] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +5.009721] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.059883] kauditd_printk_skb: 158 callbacks suppressed
	[Sep20 17:59] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +0.095619] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.048443] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.211237] kauditd_printk_skb: 38 callbacks suppressed
	[Sep20 18:00] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d] <==
	{"level":"warn","ts":"2024-09-20T18:05:26.008710Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.022482Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.029429Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.033237Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.044079Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.050635Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.057916Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.059465Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.067832Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.071927Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.080207Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.086719Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.093024Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.097465Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.101374Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.109264Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.116972Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.125505Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.130408Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.134801Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.139524Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.148174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.155351Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.156010Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:26.193060Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:05:26 up 7 min,  0 users,  load average: 0.16, 0.26, 0.14
	Linux ha-347193 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5] <==
	I0920 18:04:47.652746       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	I0920 18:04:57.660425       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:04:57.660717       1 main.go:299] handling current node
	I0920 18:04:57.660765       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:04:57.660804       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:04:57.661001       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0920 18:04:57.661174       1 main.go:322] Node ha-347193-m03 has CIDR [10.244.2.0/24] 
	I0920 18:04:57.662429       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:04:57.662486       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	I0920 18:05:07.652020       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:05:07.652151       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	I0920 18:05:07.652431       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:05:07.652468       1 main.go:299] handling current node
	I0920 18:05:07.652492       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:05:07.652521       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:05:07.652622       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0920 18:05:07.652641       1 main.go:322] Node ha-347193-m03 has CIDR [10.244.2.0/24] 
	I0920 18:05:17.653044       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:05:17.653093       1 main.go:299] handling current node
	I0920 18:05:17.653117       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:05:17.653124       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:05:17.653356       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0920 18:05:17.653380       1 main.go:322] Node ha-347193-m03 has CIDR [10.244.2.0/24] 
	I0920 18:05:17.653452       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:05:17.653459       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [dce6ebcdcfa257db8db2294b5d4f9a4b32180c57f4a35e50afb651fea43c30c4] <==
	I0920 17:59:00.040430       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0920 17:59:00.047869       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.246]
	I0920 17:59:00.048849       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 17:59:00.053986       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 17:59:00.261870       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 17:59:01.502885       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 17:59:01.521849       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0920 17:59:01.592823       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 17:59:05.362201       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0920 17:59:05.964721       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0920 18:01:45.789676       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35350: use of closed network connection
	E0920 18:01:45.987221       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35370: use of closed network connection
	E0920 18:01:46.203136       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35382: use of closed network connection
	E0920 18:01:46.410018       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35390: use of closed network connection
	E0920 18:01:46.596914       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35412: use of closed network connection
	E0920 18:01:46.785733       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35422: use of closed network connection
	E0920 18:01:46.963707       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35432: use of closed network connection
	E0920 18:01:47.352644       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35476: use of closed network connection
	E0920 18:01:47.677101       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35494: use of closed network connection
	E0920 18:01:47.852966       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35506: use of closed network connection
	E0920 18:01:48.037422       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35530: use of closed network connection
	E0920 18:01:48.215519       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35552: use of closed network connection
	E0920 18:01:48.395158       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35562: use of closed network connection
	E0920 18:01:48.571105       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35590: use of closed network connection
	W0920 18:03:10.061403       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.246 192.168.39.250]
	
	
	==> kube-controller-manager [5db95e41c4eeebdea4c6e2d542a978f368ff56b30edfe1ff54593feed82c7c09] <==
	I0920 18:02:18.858784       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:18.864042       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:18.983762       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	E0920 18:02:18.993581       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"f4761ca4-6943-48ac-a03b-0da33530a65b\", ResourceVersion:\"914\", Generation:1, CreationTimestamp:time.Date(2024, time.September, 20, 17, 59, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0025004a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\
", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource
)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00281e400), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0024c5650), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolum
eSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVo
lumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0024c5668), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtua
lDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.31.1\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc0025004e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Res
ourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\
"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0026a90e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc002a3e7a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002939a00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.Host
Alias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc002991a60)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002a3e800)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfille
d on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0920 18:02:19.395187       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:19.617626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:19.719171       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:19.749178       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:20.458068       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-347193-m04"
	I0920 18:02:20.458612       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:20.582907       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:28.989089       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:38.284793       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-347193-m04"
	I0920 18:02:38.284953       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:38.304718       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:39.566872       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:49.271755       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:03:30.485154       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-347193-m04"
	I0920 18:03:30.485472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m02"
	I0920 18:03:30.507411       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m02"
	I0920 18:03:30.648029       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="83.389173ms"
	I0920 18:03:30.648163       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.297µs"
	I0920 18:03:34.617103       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m02"
	I0920 18:03:35.794110       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m02"
	
	
	==> kube-proxy [54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 17:59:08.146402       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 17:59:08.169465       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.246"]
	E0920 17:59:08.169636       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:59:08.200549       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 17:59:08.200672       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 17:59:08.200715       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:59:08.203687       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:59:08.204074       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:59:08.204250       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:59:08.207892       1 config.go:199] "Starting service config controller"
	I0920 17:59:08.208388       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:59:08.208680       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:59:08.211000       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:59:08.208820       1 config.go:328] "Starting node config controller"
	I0920 17:59:08.211110       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:59:08.308818       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:59:08.311223       1 shared_informer.go:320] Caches are synced for node config
	I0920 17:59:08.311448       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f] <==
	W0920 17:58:59.136078       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 17:58:59.136125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.152907       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 17:58:59.152970       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.232222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 17:58:59.232522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.417181       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 17:58:59.417310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.425477       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 17:58:59.426116       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.425550       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 17:58:59.426253       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.487540       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 17:58:59.487590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.537813       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:58:59.537936       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.543453       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 17:58:59.543567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.650341       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 17:58:59.650386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 17:59:01.377349       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0920 18:02:18.846875       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-t5f94\": pod kindnet-t5f94 is already assigned to node \"ha-347193-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-t5f94" node="ha-347193-m04"
	E0920 18:02:18.847041       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 33dab94e-9da4-4a58-83f6-a7a351c8c216(kube-system/kindnet-t5f94) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-t5f94"
	E0920 18:02:18.847081       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-t5f94\": pod kindnet-t5f94 is already assigned to node \"ha-347193-m04\"" pod="kube-system/kindnet-t5f94"
	I0920 18:02:18.847108       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-t5f94" node="ha-347193-m04"
	
	
	==> kubelet <==
	Sep 20 18:04:01 ha-347193 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:04:01 ha-347193 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:04:01 ha-347193 kubelet[1310]: E0920 18:04:01.734604    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855441733710654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:01 ha-347193 kubelet[1310]: E0920 18:04:01.734645    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855441733710654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:11 ha-347193 kubelet[1310]: E0920 18:04:11.739024    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855451738567159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:11 ha-347193 kubelet[1310]: E0920 18:04:11.739486    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855451738567159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:21 ha-347193 kubelet[1310]: E0920 18:04:21.742025    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855461741705291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:21 ha-347193 kubelet[1310]: E0920 18:04:21.742425    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855461741705291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:31 ha-347193 kubelet[1310]: E0920 18:04:31.746728    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855471746051287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:31 ha-347193 kubelet[1310]: E0920 18:04:31.746767    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855471746051287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:41 ha-347193 kubelet[1310]: E0920 18:04:41.748167    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855481747854827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:41 ha-347193 kubelet[1310]: E0920 18:04:41.748208    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855481747854827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:51 ha-347193 kubelet[1310]: E0920 18:04:51.749843    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855491749438137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:51 ha-347193 kubelet[1310]: E0920 18:04:51.750125    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855491749438137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:05:01 ha-347193 kubelet[1310]: E0920 18:05:01.623246    1310 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 18:05:01 ha-347193 kubelet[1310]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:05:01 ha-347193 kubelet[1310]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:05:01 ha-347193 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:05:01 ha-347193 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:05:01 ha-347193 kubelet[1310]: E0920 18:05:01.752334    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855501751934283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:05:01 ha-347193 kubelet[1310]: E0920 18:05:01.752368    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855501751934283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:05:11 ha-347193 kubelet[1310]: E0920 18:05:11.756769    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855511755858702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:05:11 ha-347193 kubelet[1310]: E0920 18:05:11.757336    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855511755858702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:05:21 ha-347193 kubelet[1310]: E0920 18:05:21.759742    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855521759123751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:05:21 ha-347193 kubelet[1310]: E0920 18:05:21.759770    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855521759123751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-347193 -n ha-347193
helpers_test.go:261: (dbg) Run:  kubectl --context ha-347193 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.73s)
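Note on the controller-manager and scheduler errors in the logs above: "Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" is Kubernetes' optimistic-concurrency conflict (a stale resourceVersion on update) and is normally transient; the writer re-reads the object and retries. A minimal client-go sketch of that retry pattern follows. It is illustrative only, not minikube or controller-manager code: the daemonset name and namespace come from the log, while the kubeconfig loading and the RevisionHistoryLimit mutation are assumptions made for the example.

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/retry"
    )

    // bumpRevisionHistory shows the standard answer to "the object has been
    // modified": wrap the read-modify-write in retry.RetryOnConflict so a
    // conflicting concurrent update causes a re-read and a retry instead of a
    // hard failure.
    func bumpRevisionHistory(clientset kubernetes.Interface) error {
    	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		// Re-fetch the latest object on every attempt.
    		ds, err := clientset.AppsV1().DaemonSets("kube-system").Get(
    			context.TODO(), "kube-proxy", metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		limit := int32(10)
    		ds.Spec.RevisionHistoryLimit = &limit // example mutation only
    		_, err = clientset.AppsV1().DaemonSets("kube-system").Update(
    			context.TODO(), ds, metav1.UpdateOptions{})
    		return err // a 409 Conflict here triggers another iteration
    	})
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	clientset := kubernetes.NewForConfigOrDie(config)
    	if err := bumpRevisionHistory(clientset); err != nil {
    		panic(err)
    	}
    	fmt.Println("daemonset updated")
    }

Seen from that angle, the daemonset and pods/binding conflicts above are expected churn while nodes join and are not, by themselves, the cause of this failure.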

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr: (3.908908149s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-347193 -n ha-347193
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-347193 logs -n 25: (1.410150143s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m03:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193:/home/docker/cp-test_ha-347193-m03_ha-347193.txt                       |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193 sudo cat                                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m03_ha-347193.txt                                 |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m03:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m02:/home/docker/cp-test_ha-347193-m03_ha-347193-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193-m02 sudo cat                                          | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m03_ha-347193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m03:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04:/home/docker/cp-test_ha-347193-m03_ha-347193-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193-m04 sudo cat                                          | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m03_ha-347193-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-347193 cp testdata/cp-test.txt                                                | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3833348347/001/cp-test_ha-347193-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193:/home/docker/cp-test_ha-347193-m04_ha-347193.txt                       |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193 sudo cat                                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m04_ha-347193.txt                                 |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m02:/home/docker/cp-test_ha-347193-m04_ha-347193-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193-m02 sudo cat                                          | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m04_ha-347193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m03:/home/docker/cp-test_ha-347193-m04_ha-347193-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193-m03 sudo cat                                          | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m04_ha-347193-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-347193 node stop m02 -v=7                                                     | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-347193 node start m02 -v=7                                                    | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:58:19
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:58:19.719554  256536 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:58:19.719784  256536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:58:19.719792  256536 out.go:358] Setting ErrFile to fd 2...
	I0920 17:58:19.719796  256536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:58:19.719960  256536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 17:58:19.720540  256536 out.go:352] Setting JSON to false
	I0920 17:58:19.721444  256536 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6043,"bootTime":1726849057,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:58:19.721554  256536 start.go:139] virtualization: kvm guest
	I0920 17:58:19.723941  256536 out.go:177] * [ha-347193] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:58:19.725468  256536 notify.go:220] Checking for updates...
	I0920 17:58:19.725480  256536 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 17:58:19.727002  256536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:58:19.728644  256536 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:58:19.730001  256536 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:58:19.731378  256536 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:58:19.732922  256536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:58:19.734763  256536 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:58:19.774481  256536 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 17:58:19.776642  256536 start.go:297] selected driver: kvm2
	I0920 17:58:19.776667  256536 start.go:901] validating driver "kvm2" against <nil>
	I0920 17:58:19.776681  256536 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:58:19.777528  256536 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:58:19.777634  256536 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 17:58:19.794619  256536 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 17:58:19.795141  256536 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:58:19.795583  256536 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:58:19.795675  256536 cni.go:84] Creating CNI manager for ""
	I0920 17:58:19.795761  256536 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0920 17:58:19.795792  256536 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 17:58:19.795946  256536 start.go:340] cluster config:
	{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0920 17:58:19.796187  256536 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:58:19.798837  256536 out.go:177] * Starting "ha-347193" primary control-plane node in "ha-347193" cluster
	I0920 17:58:19.800296  256536 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:58:19.800352  256536 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 17:58:19.800362  256536 cache.go:56] Caching tarball of preloaded images
	I0920 17:58:19.800459  256536 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:58:19.800470  256536 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:58:19.800790  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 17:58:19.800819  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json: {Name:mkfd3b988e8aa616e3cc88608f2502239f4ba220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:19.800990  256536 start.go:360] acquireMachinesLock for ha-347193: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:58:19.801023  256536 start.go:364] duration metric: took 17.719µs to acquireMachinesLock for "ha-347193"
	I0920 17:58:19.801041  256536 start.go:93] Provisioning new machine with config: &{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:58:19.801110  256536 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 17:58:19.803289  256536 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 17:58:19.803488  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:58:19.803546  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:58:19.819050  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41053
	I0920 17:58:19.819630  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:58:19.820279  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:58:19.820296  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:58:19.820691  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:58:19.820938  256536 main.go:141] libmachine: (ha-347193) Calling .GetMachineName
	I0920 17:58:19.821115  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:19.821335  256536 start.go:159] libmachine.API.Create for "ha-347193" (driver="kvm2")
	I0920 17:58:19.821366  256536 client.go:168] LocalClient.Create starting
	I0920 17:58:19.821397  256536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem
	I0920 17:58:19.821431  256536 main.go:141] libmachine: Decoding PEM data...
	I0920 17:58:19.821444  256536 main.go:141] libmachine: Parsing certificate...
	I0920 17:58:19.821515  256536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem
	I0920 17:58:19.821537  256536 main.go:141] libmachine: Decoding PEM data...
	I0920 17:58:19.821546  256536 main.go:141] libmachine: Parsing certificate...
	I0920 17:58:19.821560  256536 main.go:141] libmachine: Running pre-create checks...
	I0920 17:58:19.821570  256536 main.go:141] libmachine: (ha-347193) Calling .PreCreateCheck
	I0920 17:58:19.821998  256536 main.go:141] libmachine: (ha-347193) Calling .GetConfigRaw
	I0920 17:58:19.822485  256536 main.go:141] libmachine: Creating machine...
	I0920 17:58:19.822507  256536 main.go:141] libmachine: (ha-347193) Calling .Create
	I0920 17:58:19.822712  256536 main.go:141] libmachine: (ha-347193) Creating KVM machine...
	I0920 17:58:19.824224  256536 main.go:141] libmachine: (ha-347193) DBG | found existing default KVM network
	I0920 17:58:19.824984  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:19.824842  256559 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000221330}
	I0920 17:58:19.825024  256536 main.go:141] libmachine: (ha-347193) DBG | created network xml: 
	I0920 17:58:19.825037  256536 main.go:141] libmachine: (ha-347193) DBG | <network>
	I0920 17:58:19.825044  256536 main.go:141] libmachine: (ha-347193) DBG |   <name>mk-ha-347193</name>
	I0920 17:58:19.825049  256536 main.go:141] libmachine: (ha-347193) DBG |   <dns enable='no'/>
	I0920 17:58:19.825054  256536 main.go:141] libmachine: (ha-347193) DBG |   
	I0920 17:58:19.825061  256536 main.go:141] libmachine: (ha-347193) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 17:58:19.825067  256536 main.go:141] libmachine: (ha-347193) DBG |     <dhcp>
	I0920 17:58:19.825072  256536 main.go:141] libmachine: (ha-347193) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 17:58:19.825079  256536 main.go:141] libmachine: (ha-347193) DBG |     </dhcp>
	I0920 17:58:19.825084  256536 main.go:141] libmachine: (ha-347193) DBG |   </ip>
	I0920 17:58:19.825090  256536 main.go:141] libmachine: (ha-347193) DBG |   
	I0920 17:58:19.825094  256536 main.go:141] libmachine: (ha-347193) DBG | </network>
	I0920 17:58:19.825099  256536 main.go:141] libmachine: (ha-347193) DBG | 
	I0920 17:58:19.830808  256536 main.go:141] libmachine: (ha-347193) DBG | trying to create private KVM network mk-ha-347193 192.168.39.0/24...
	I0920 17:58:19.907893  256536 main.go:141] libmachine: (ha-347193) DBG | private KVM network mk-ha-347193 192.168.39.0/24 created
	I0920 17:58:19.907950  256536 main.go:141] libmachine: (ha-347193) Setting up store path in /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193 ...
	I0920 17:58:19.907968  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:19.907787  256559 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:58:19.907992  256536 main.go:141] libmachine: (ha-347193) Building disk image from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 17:58:19.908014  256536 main.go:141] libmachine: (ha-347193) Downloading /home/jenkins/minikube-integration/19679-237658/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 17:58:20.183507  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:20.183335  256559 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa...
	I0920 17:58:20.394510  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:20.394309  256559 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/ha-347193.rawdisk...
	I0920 17:58:20.394561  256536 main.go:141] libmachine: (ha-347193) DBG | Writing magic tar header
	I0920 17:58:20.394576  256536 main.go:141] libmachine: (ha-347193) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193 (perms=drwx------)
	I0920 17:58:20.394593  256536 main.go:141] libmachine: (ha-347193) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines (perms=drwxr-xr-x)
	I0920 17:58:20.394599  256536 main.go:141] libmachine: (ha-347193) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube (perms=drwxr-xr-x)
	I0920 17:58:20.394610  256536 main.go:141] libmachine: (ha-347193) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658 (perms=drwxrwxr-x)
	I0920 17:58:20.394615  256536 main.go:141] libmachine: (ha-347193) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 17:58:20.394629  256536 main.go:141] libmachine: (ha-347193) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 17:58:20.394637  256536 main.go:141] libmachine: (ha-347193) DBG | Writing SSH key tar header
	I0920 17:58:20.394645  256536 main.go:141] libmachine: (ha-347193) Creating domain...
	I0920 17:58:20.394695  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:20.394434  256559 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193 ...
	I0920 17:58:20.394726  256536 main.go:141] libmachine: (ha-347193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193
	I0920 17:58:20.394740  256536 main.go:141] libmachine: (ha-347193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines
	I0920 17:58:20.394750  256536 main.go:141] libmachine: (ha-347193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:58:20.394760  256536 main.go:141] libmachine: (ha-347193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658
	I0920 17:58:20.394766  256536 main.go:141] libmachine: (ha-347193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 17:58:20.394776  256536 main.go:141] libmachine: (ha-347193) DBG | Checking permissions on dir: /home/jenkins
	I0920 17:58:20.394781  256536 main.go:141] libmachine: (ha-347193) DBG | Checking permissions on dir: /home
	I0920 17:58:20.394791  256536 main.go:141] libmachine: (ha-347193) DBG | Skipping /home - not owner
	I0920 17:58:20.396055  256536 main.go:141] libmachine: (ha-347193) define libvirt domain using xml: 
	I0920 17:58:20.396079  256536 main.go:141] libmachine: (ha-347193) <domain type='kvm'>
	I0920 17:58:20.396085  256536 main.go:141] libmachine: (ha-347193)   <name>ha-347193</name>
	I0920 17:58:20.396090  256536 main.go:141] libmachine: (ha-347193)   <memory unit='MiB'>2200</memory>
	I0920 17:58:20.396095  256536 main.go:141] libmachine: (ha-347193)   <vcpu>2</vcpu>
	I0920 17:58:20.396099  256536 main.go:141] libmachine: (ha-347193)   <features>
	I0920 17:58:20.396104  256536 main.go:141] libmachine: (ha-347193)     <acpi/>
	I0920 17:58:20.396108  256536 main.go:141] libmachine: (ha-347193)     <apic/>
	I0920 17:58:20.396113  256536 main.go:141] libmachine: (ha-347193)     <pae/>
	I0920 17:58:20.396121  256536 main.go:141] libmachine: (ha-347193)     
	I0920 17:58:20.396125  256536 main.go:141] libmachine: (ha-347193)   </features>
	I0920 17:58:20.396130  256536 main.go:141] libmachine: (ha-347193)   <cpu mode='host-passthrough'>
	I0920 17:58:20.396135  256536 main.go:141] libmachine: (ha-347193)   
	I0920 17:58:20.396139  256536 main.go:141] libmachine: (ha-347193)   </cpu>
	I0920 17:58:20.396144  256536 main.go:141] libmachine: (ha-347193)   <os>
	I0920 17:58:20.396150  256536 main.go:141] libmachine: (ha-347193)     <type>hvm</type>
	I0920 17:58:20.396155  256536 main.go:141] libmachine: (ha-347193)     <boot dev='cdrom'/>
	I0920 17:58:20.396161  256536 main.go:141] libmachine: (ha-347193)     <boot dev='hd'/>
	I0920 17:58:20.396220  256536 main.go:141] libmachine: (ha-347193)     <bootmenu enable='no'/>
	I0920 17:58:20.396253  256536 main.go:141] libmachine: (ha-347193)   </os>
	I0920 17:58:20.396265  256536 main.go:141] libmachine: (ha-347193)   <devices>
	I0920 17:58:20.396277  256536 main.go:141] libmachine: (ha-347193)     <disk type='file' device='cdrom'>
	I0920 17:58:20.396294  256536 main.go:141] libmachine: (ha-347193)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/boot2docker.iso'/>
	I0920 17:58:20.396309  256536 main.go:141] libmachine: (ha-347193)       <target dev='hdc' bus='scsi'/>
	I0920 17:58:20.396321  256536 main.go:141] libmachine: (ha-347193)       <readonly/>
	I0920 17:58:20.396335  256536 main.go:141] libmachine: (ha-347193)     </disk>
	I0920 17:58:20.396350  256536 main.go:141] libmachine: (ha-347193)     <disk type='file' device='disk'>
	I0920 17:58:20.396362  256536 main.go:141] libmachine: (ha-347193)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 17:58:20.396376  256536 main.go:141] libmachine: (ha-347193)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/ha-347193.rawdisk'/>
	I0920 17:58:20.396387  256536 main.go:141] libmachine: (ha-347193)       <target dev='hda' bus='virtio'/>
	I0920 17:58:20.396398  256536 main.go:141] libmachine: (ha-347193)     </disk>
	I0920 17:58:20.396413  256536 main.go:141] libmachine: (ha-347193)     <interface type='network'>
	I0920 17:58:20.396427  256536 main.go:141] libmachine: (ha-347193)       <source network='mk-ha-347193'/>
	I0920 17:58:20.396437  256536 main.go:141] libmachine: (ha-347193)       <model type='virtio'/>
	I0920 17:58:20.396449  256536 main.go:141] libmachine: (ha-347193)     </interface>
	I0920 17:58:20.396460  256536 main.go:141] libmachine: (ha-347193)     <interface type='network'>
	I0920 17:58:20.396470  256536 main.go:141] libmachine: (ha-347193)       <source network='default'/>
	I0920 17:58:20.396484  256536 main.go:141] libmachine: (ha-347193)       <model type='virtio'/>
	I0920 17:58:20.396495  256536 main.go:141] libmachine: (ha-347193)     </interface>
	I0920 17:58:20.396502  256536 main.go:141] libmachine: (ha-347193)     <serial type='pty'>
	I0920 17:58:20.396514  256536 main.go:141] libmachine: (ha-347193)       <target port='0'/>
	I0920 17:58:20.396524  256536 main.go:141] libmachine: (ha-347193)     </serial>
	I0920 17:58:20.396535  256536 main.go:141] libmachine: (ha-347193)     <console type='pty'>
	I0920 17:58:20.396546  256536 main.go:141] libmachine: (ha-347193)       <target type='serial' port='0'/>
	I0920 17:58:20.396570  256536 main.go:141] libmachine: (ha-347193)     </console>
	I0920 17:58:20.396588  256536 main.go:141] libmachine: (ha-347193)     <rng model='virtio'>
	I0920 17:58:20.396595  256536 main.go:141] libmachine: (ha-347193)       <backend model='random'>/dev/random</backend>
	I0920 17:58:20.396604  256536 main.go:141] libmachine: (ha-347193)     </rng>
	I0920 17:58:20.396635  256536 main.go:141] libmachine: (ha-347193)     
	I0920 17:58:20.396657  256536 main.go:141] libmachine: (ha-347193)     
	I0920 17:58:20.396672  256536 main.go:141] libmachine: (ha-347193)   </devices>
	I0920 17:58:20.396680  256536 main.go:141] libmachine: (ha-347193) </domain>
	I0920 17:58:20.396699  256536 main.go:141] libmachine: (ha-347193) 
	I0920 17:58:20.401190  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:83:b4:8d in network default
	I0920 17:58:20.401745  256536 main.go:141] libmachine: (ha-347193) Ensuring networks are active...
	I0920 17:58:20.401764  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:20.402424  256536 main.go:141] libmachine: (ha-347193) Ensuring network default is active
	I0920 17:58:20.402677  256536 main.go:141] libmachine: (ha-347193) Ensuring network mk-ha-347193 is active
	I0920 17:58:20.403127  256536 main.go:141] libmachine: (ha-347193) Getting domain xml...
	I0920 17:58:20.403705  256536 main.go:141] libmachine: (ha-347193) Creating domain...
	I0920 17:58:21.630872  256536 main.go:141] libmachine: (ha-347193) Waiting to get IP...
	I0920 17:58:21.631658  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:21.632047  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:21.632073  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:21.632024  256559 retry.go:31] will retry after 215.475523ms: waiting for machine to come up
	I0920 17:58:21.849753  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:21.850279  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:21.850310  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:21.850240  256559 retry.go:31] will retry after 263.201454ms: waiting for machine to come up
	I0920 17:58:22.114802  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:22.115310  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:22.115338  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:22.115259  256559 retry.go:31] will retry after 445.148422ms: waiting for machine to come up
	I0920 17:58:22.562073  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:22.562548  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:22.562573  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:22.562510  256559 retry.go:31] will retry after 558.224345ms: waiting for machine to come up
	I0920 17:58:23.122632  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:23.123096  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:23.123123  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:23.123050  256559 retry.go:31] will retry after 528.914105ms: waiting for machine to come up
	I0920 17:58:23.654056  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:23.654437  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:23.654467  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:23.654380  256559 retry.go:31] will retry after 657.509004ms: waiting for machine to come up
	I0920 17:58:24.313318  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:24.313802  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:24.313857  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:24.313765  256559 retry.go:31] will retry after 757.318604ms: waiting for machine to come up
	I0920 17:58:25.072515  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:25.072965  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:25.072995  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:25.072907  256559 retry.go:31] will retry after 1.361384929s: waiting for machine to come up
	I0920 17:58:26.435555  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:26.436017  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:26.436061  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:26.435982  256559 retry.go:31] will retry after 1.541186599s: waiting for machine to come up
	I0920 17:58:27.979940  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:27.980429  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:27.980460  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:27.980357  256559 retry.go:31] will retry after 1.786301166s: waiting for machine to come up
	I0920 17:58:29.767912  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:29.768468  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:29.768491  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:29.768439  256559 retry.go:31] will retry after 1.809883951s: waiting for machine to come up
	I0920 17:58:31.581113  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:31.581588  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:31.581619  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:31.581535  256559 retry.go:31] will retry after 3.405747274s: waiting for machine to come up
	I0920 17:58:34.988932  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:34.989387  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:34.989410  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:34.989369  256559 retry.go:31] will retry after 3.845362816s: waiting for machine to come up
	I0920 17:58:38.839191  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:38.839734  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find current IP address of domain ha-347193 in network mk-ha-347193
	I0920 17:58:38.839759  256536 main.go:141] libmachine: (ha-347193) DBG | I0920 17:58:38.839690  256559 retry.go:31] will retry after 3.611631644s: waiting for machine to come up
	I0920 17:58:42.454482  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.454977  256536 main.go:141] libmachine: (ha-347193) Found IP for machine: 192.168.39.246
	I0920 17:58:42.455003  256536 main.go:141] libmachine: (ha-347193) Reserving static IP address...
	I0920 17:58:42.455016  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has current primary IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.455495  256536 main.go:141] libmachine: (ha-347193) DBG | unable to find host DHCP lease matching {name: "ha-347193", mac: "52:54:00:2e:07:bb", ip: "192.168.39.246"} in network mk-ha-347193
	I0920 17:58:42.533022  256536 main.go:141] libmachine: (ha-347193) DBG | Getting to WaitForSSH function...
	I0920 17:58:42.533056  256536 main.go:141] libmachine: (ha-347193) Reserved static IP address: 192.168.39.246
	I0920 17:58:42.533070  256536 main.go:141] libmachine: (ha-347193) Waiting for SSH to be available...
	I0920 17:58:42.535894  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.536329  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:42.536361  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.536501  256536 main.go:141] libmachine: (ha-347193) DBG | Using SSH client type: external
	I0920 17:58:42.536525  256536 main.go:141] libmachine: (ha-347193) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa (-rw-------)
	I0920 17:58:42.536553  256536 main.go:141] libmachine: (ha-347193) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 17:58:42.536592  256536 main.go:141] libmachine: (ha-347193) DBG | About to run SSH command:
	I0920 17:58:42.536627  256536 main.go:141] libmachine: (ha-347193) DBG | exit 0
	I0920 17:58:42.662095  256536 main.go:141] libmachine: (ha-347193) DBG | SSH cmd err, output: <nil>: 
	I0920 17:58:42.662356  256536 main.go:141] libmachine: (ha-347193) KVM machine creation complete!
	I0920 17:58:42.662742  256536 main.go:141] libmachine: (ha-347193) Calling .GetConfigRaw
	I0920 17:58:42.663393  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:42.663609  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:42.663783  256536 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 17:58:42.663799  256536 main.go:141] libmachine: (ha-347193) Calling .GetState
	I0920 17:58:42.665335  256536 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 17:58:42.665349  256536 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 17:58:42.665355  256536 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 17:58:42.665361  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:42.667970  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.668505  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:42.668538  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.668703  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:42.668963  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:42.669124  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:42.669264  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:42.669457  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:42.669727  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 17:58:42.669743  256536 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 17:58:42.777219  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:58:42.777243  256536 main.go:141] libmachine: Detecting the provisioner...
	I0920 17:58:42.777251  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:42.779860  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.780225  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:42.780252  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.780402  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:42.780602  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:42.780743  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:42.780837  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:42.781037  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:42.781263  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 17:58:42.781279  256536 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 17:58:42.886633  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 17:58:42.886732  256536 main.go:141] libmachine: found compatible host: buildroot
	I0920 17:58:42.886747  256536 main.go:141] libmachine: Provisioning with buildroot...
	I0920 17:58:42.886757  256536 main.go:141] libmachine: (ha-347193) Calling .GetMachineName
	I0920 17:58:42.887046  256536 buildroot.go:166] provisioning hostname "ha-347193"
	I0920 17:58:42.887073  256536 main.go:141] libmachine: (ha-347193) Calling .GetMachineName
	I0920 17:58:42.887313  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:42.889831  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.890182  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:42.890207  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:42.890355  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:42.890545  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:42.890718  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:42.890846  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:42.891093  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:42.891253  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 17:58:42.891265  256536 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-347193 && echo "ha-347193" | sudo tee /etc/hostname
	I0920 17:58:43.011225  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-347193
	
	I0920 17:58:43.011253  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.014324  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.014803  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.014831  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.015003  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:43.015234  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.015466  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.015676  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:43.015888  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:43.016055  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 17:58:43.016070  256536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-347193' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-347193/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-347193' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:58:43.130242  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:58:43.130286  256536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 17:58:43.130357  256536 buildroot.go:174] setting up certificates
	I0920 17:58:43.130379  256536 provision.go:84] configureAuth start
	I0920 17:58:43.130401  256536 main.go:141] libmachine: (ha-347193) Calling .GetMachineName
	I0920 17:58:43.130726  256536 main.go:141] libmachine: (ha-347193) Calling .GetIP
	I0920 17:58:43.133505  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.133825  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.133848  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.134052  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.136401  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.136730  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.136750  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.136952  256536 provision.go:143] copyHostCerts
	I0920 17:58:43.136981  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 17:58:43.137013  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 17:58:43.137030  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 17:58:43.137096  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 17:58:43.137174  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 17:58:43.137193  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 17:58:43.137199  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 17:58:43.137223  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 17:58:43.137264  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 17:58:43.137284  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 17:58:43.137292  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 17:58:43.137312  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 17:58:43.137361  256536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.ha-347193 san=[127.0.0.1 192.168.39.246 ha-347193 localhost minikube]
	I0920 17:58:43.262974  256536 provision.go:177] copyRemoteCerts
	I0920 17:58:43.263055  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:58:43.263085  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.265602  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.265934  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.265962  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.266136  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:43.266349  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.266507  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:43.266640  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:58:43.348226  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 17:58:43.348355  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 17:58:43.371291  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 17:58:43.371380  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0920 17:58:43.393409  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 17:58:43.393490  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 17:58:43.417165  256536 provision.go:87] duration metric: took 286.759784ms to configureAuth
	I0920 17:58:43.417200  256536 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:58:43.417422  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:58:43.417508  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.420548  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.420826  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.420856  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.421056  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:43.421256  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.421438  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.421576  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:43.421745  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:43.422081  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 17:58:43.422105  256536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:58:43.638028  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:58:43.638062  256536 main.go:141] libmachine: Checking connection to Docker...
	I0920 17:58:43.638075  256536 main.go:141] libmachine: (ha-347193) Calling .GetURL
	I0920 17:58:43.639465  256536 main.go:141] libmachine: (ha-347193) DBG | Using libvirt version 6000000
	I0920 17:58:43.641835  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.642260  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.642284  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.642472  256536 main.go:141] libmachine: Docker is up and running!
	I0920 17:58:43.642489  256536 main.go:141] libmachine: Reticulating splines...
	I0920 17:58:43.642498  256536 client.go:171] duration metric: took 23.821123659s to LocalClient.Create
	I0920 17:58:43.642520  256536 start.go:167] duration metric: took 23.821189376s to libmachine.API.Create "ha-347193"
	I0920 17:58:43.642527  256536 start.go:293] postStartSetup for "ha-347193" (driver="kvm2")
	I0920 17:58:43.642537  256536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:58:43.642552  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:43.642767  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:58:43.642797  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.645726  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.646207  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.646228  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.646384  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:43.646562  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.646731  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:43.646875  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:58:43.732855  256536 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:58:43.737146  256536 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:58:43.737179  256536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 17:58:43.737266  256536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 17:58:43.737348  256536 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 17:58:43.737360  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /etc/ssl/certs/2448492.pem
	I0920 17:58:43.737457  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:58:43.746873  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 17:58:43.769680  256536 start.go:296] duration metric: took 127.135312ms for postStartSetup
	I0920 17:58:43.769753  256536 main.go:141] libmachine: (ha-347193) Calling .GetConfigRaw
	I0920 17:58:43.770539  256536 main.go:141] libmachine: (ha-347193) Calling .GetIP
	I0920 17:58:43.773368  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.773790  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.773812  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.774131  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 17:58:43.774327  256536 start.go:128] duration metric: took 23.973205594s to createHost
	I0920 17:58:43.774352  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.776811  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.777154  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.777173  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.777359  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:43.777566  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.777714  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.777851  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:43.778046  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:58:43.778254  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 17:58:43.778275  256536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:58:43.886468  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855123.865489975
	
	I0920 17:58:43.886492  256536 fix.go:216] guest clock: 1726855123.865489975
	I0920 17:58:43.886500  256536 fix.go:229] Guest: 2024-09-20 17:58:43.865489975 +0000 UTC Remote: 2024-09-20 17:58:43.77433865 +0000 UTC m=+24.090830996 (delta=91.151325ms)
	I0920 17:58:43.886521  256536 fix.go:200] guest clock delta is within tolerance: 91.151325ms
	I0920 17:58:43.886526  256536 start.go:83] releasing machines lock for "ha-347193", held for 24.085494311s
	I0920 17:58:43.886548  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:43.886838  256536 main.go:141] libmachine: (ha-347193) Calling .GetIP
	I0920 17:58:43.889513  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.889872  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.889896  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.890072  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:43.890584  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:43.890771  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:58:43.890844  256536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:58:43.890926  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.890977  256536 ssh_runner.go:195] Run: cat /version.json
	I0920 17:58:43.891005  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:58:43.893664  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.894009  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.894036  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.894186  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.894206  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:43.894370  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.894560  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:43.894569  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:43.894586  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:43.894713  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:58:43.894782  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:58:43.894935  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:58:43.895088  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:58:43.895207  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:58:43.976109  256536 ssh_runner.go:195] Run: systemctl --version
	I0920 17:58:44.018728  256536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:58:44.175337  256536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:58:44.181194  256536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:58:44.181279  256536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:58:44.199685  256536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 17:58:44.199719  256536 start.go:495] detecting cgroup driver to use...
	I0920 17:58:44.199799  256536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:58:44.215955  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:58:44.230482  256536 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:58:44.230549  256536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:58:44.244728  256536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:58:44.258137  256536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:58:44.370456  256536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:58:44.514103  256536 docker.go:233] disabling docker service ...
	I0920 17:58:44.514175  256536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:58:44.536863  256536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:58:44.550231  256536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:58:44.683486  256536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:58:44.793154  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:58:44.806166  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:58:44.823607  256536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:58:44.823754  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:44.833725  256536 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:58:44.833789  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:44.843703  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:44.853327  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:44.862729  256536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:58:44.872472  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:44.882312  256536 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:44.898952  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:58:44.908482  256536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:58:44.917186  256536 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 17:58:44.917249  256536 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 17:58:44.928614  256536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:58:44.938764  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:58:45.045827  256536 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 17:58:45.135797  256536 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:58:45.135868  256536 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:58:45.140339  256536 start.go:563] Will wait 60s for crictl version
	I0920 17:58:45.140407  256536 ssh_runner.go:195] Run: which crictl
	I0920 17:58:45.144096  256536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:58:45.187435  256536 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:58:45.187543  256536 ssh_runner.go:195] Run: crio --version
	I0920 17:58:45.213699  256536 ssh_runner.go:195] Run: crio --version
	I0920 17:58:45.242965  256536 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:58:45.244260  256536 main.go:141] libmachine: (ha-347193) Calling .GetIP
	I0920 17:58:45.247006  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:45.247310  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:58:45.247334  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:58:45.247515  256536 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:58:45.251447  256536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:58:45.263292  256536 kubeadm.go:883] updating cluster {Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 17:58:45.263401  256536 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:58:45.263455  256536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:58:45.293889  256536 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 17:58:45.293981  256536 ssh_runner.go:195] Run: which lz4
	I0920 17:58:45.297564  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0920 17:58:45.297677  256536 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 17:58:45.301429  256536 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 17:58:45.301465  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 17:58:46.526820  256536 crio.go:462] duration metric: took 1.229164304s to copy over tarball
	I0920 17:58:46.526906  256536 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 17:58:48.552055  256536 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.025114598s)
	I0920 17:58:48.552091  256536 crio.go:469] duration metric: took 2.025229025s to extract the tarball
	I0920 17:58:48.552101  256536 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 17:58:48.595514  256536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:58:48.637483  256536 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 17:58:48.637509  256536 cache_images.go:84] Images are preloaded, skipping loading
	I0920 17:58:48.637517  256536 kubeadm.go:934] updating node { 192.168.39.246 8443 v1.31.1 crio true true} ...
	I0920 17:58:48.637615  256536 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-347193 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 17:58:48.637681  256536 ssh_runner.go:195] Run: crio config
	I0920 17:58:48.685785  256536 cni.go:84] Creating CNI manager for ""
	I0920 17:58:48.685807  256536 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 17:58:48.685817  256536 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 17:58:48.685841  256536 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.246 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-347193 NodeName:ha-347193 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 17:58:48.686000  256536 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-347193"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
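
The kubeadm config printed above is one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by "---", which the log later copies to /var/tmp/minikube/kubeadm.yaml.new. As a minimal, standard-library-only sketch of handling such a stream, the program below splits the documents and reports each one's kind and apiVersion; the file path is the on-node location from the log and is an assumption anywhere else.

// splitkinds.go - list kind/apiVersion of each document in a multi-document
// kubeadm YAML stream such as /var/tmp/minikube/kubeadm.yaml (path assumed).
package main

import (
    "fmt"
    "os"
    "strings"
)

func main() {
    data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // assumed path
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    // Documents in the stream are separated by a line containing only "---".
    for i, doc := range strings.Split(string(data), "\n---\n") {
        kind, apiVersion := "", ""
        for _, line := range strings.Split(doc, "\n") {
            switch {
            case strings.HasPrefix(line, "kind:"):
                kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
            case strings.HasPrefix(line, "apiVersion:"):
                apiVersion = strings.TrimSpace(strings.TrimPrefix(line, "apiVersion:"))
            }
        }
        fmt.Printf("document %d: kind=%s apiVersion=%s\n", i, kind, apiVersion)
    }
}
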
	
	I0920 17:58:48.686029  256536 kube-vip.go:115] generating kube-vip config ...
	I0920 17:58:48.686069  256536 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 17:58:48.702147  256536 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 17:58:48.702255  256536 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
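
The kube-vip static pod above publishes 192.168.39.254 as the control-plane VIP and, with cp_enable/lb_enable set, load-balances API traffic on port 8443 across control-plane members. A small reachability sketch follows; it only checks that the VIP accepts a TCP connection (address and port are taken from the manifest, and network access to the cluster from wherever this runs is assumed).

// vipcheck.go - probe the kube-vip control-plane VIP for TCP reachability.
package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    const vip = "192.168.39.254:8443" // APIServerHAVIP and port from the config above
    conn, err := net.DialTimeout("tcp", vip, 3*time.Second)
    if err != nil {
        fmt.Printf("VIP %s not reachable: %v\n", vip, err)
        return
    }
    defer conn.Close()
    fmt.Printf("VIP %s accepted a TCP connection\n", vip)
}
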
	I0920 17:58:48.702306  256536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:58:48.711975  256536 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 17:58:48.712116  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 17:58:48.721456  256536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0920 17:58:48.737853  256536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:58:48.754664  256536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 17:58:48.771220  256536 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0920 17:58:48.786667  256536 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 17:58:48.790274  256536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
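
The bash one-liner above removes any stale control-plane.minikube.internal entry from /etc/hosts and appends the HA VIP. A rough Go equivalent of that rewrite, included only to make the edit explicit (the address is the one from the log, and writing /etc/hosts requires root):

// hostsfix.go - rewrite /etc/hosts so control-plane.minikube.internal resolves
// to the HA VIP, mirroring the shell pipeline in the log.
package main

import (
    "fmt"
    "os"
    "strings"
)

func main() {
    const entry = "192.168.39.254\tcontrol-plane.minikube.internal"
    data, err := os.ReadFile("/etc/hosts")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    var kept []string
    for _, line := range strings.Split(string(data), "\n") {
        // Drop any existing mapping for the control-plane alias.
        if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
            continue
        }
        kept = append(kept, line)
    }
    out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
    if err := os.WriteFile("/etc/hosts", []byte(out), 0644); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}
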
	I0920 17:58:48.802824  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:58:48.920298  256536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:58:48.937204  256536 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193 for IP: 192.168.39.246
	I0920 17:58:48.937241  256536 certs.go:194] generating shared ca certs ...
	I0920 17:58:48.937263  256536 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:48.937423  256536 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 17:58:48.937475  256536 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 17:58:48.937490  256536 certs.go:256] generating profile certs ...
	I0920 17:58:48.937561  256536 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key
	I0920 17:58:48.937579  256536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.crt with IP's: []
	I0920 17:58:49.084514  256536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.crt ...
	I0920 17:58:49.084549  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.crt: {Name:mk13d47d95d81e73445ca468d2d07a6230b36ca1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:49.084751  256536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key ...
	I0920 17:58:49.084769  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key: {Name:mk2e8c8a89fbce74c4a6cf70a50b1649d0b0d470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:49.084875  256536 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.3b9d2b82
	I0920 17:58:49.084895  256536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.3b9d2b82 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.254]
	I0920 17:58:49.268687  256536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.3b9d2b82 ...
	I0920 17:58:49.268724  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.3b9d2b82: {Name:mkc4d8dcb610e2c55a07bec95a2587e189c4dfcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:49.268922  256536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.3b9d2b82 ...
	I0920 17:58:49.268941  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.3b9d2b82: {Name:mk97e4ea20b46f77acfe6f051b666b6376a68732 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:49.269045  256536 certs.go:381] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.3b9d2b82 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt
	I0920 17:58:49.269140  256536 certs.go:385] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.3b9d2b82 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key
	I0920 17:58:49.269224  256536 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key
	I0920 17:58:49.269247  256536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt with IP's: []
	I0920 17:58:49.848819  256536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt ...
	I0920 17:58:49.848866  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt: {Name:mk6162fd8372a3b1149ed5cf0cc51090f3274530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:49.849075  256536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key ...
	I0920 17:58:49.849088  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key: {Name:mk1d07a6aa2e0b7041a110499c13eb6b4fb89fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:58:49.849167  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 17:58:49.849186  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 17:58:49.849200  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 17:58:49.849215  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 17:58:49.849230  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 17:58:49.849245  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 17:58:49.849263  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 17:58:49.849275  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 17:58:49.849331  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 17:58:49.849370  256536 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 17:58:49.849382  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:58:49.849407  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 17:58:49.849435  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:58:49.849460  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 17:58:49.849503  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 17:58:49.849533  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:58:49.849550  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem -> /usr/share/ca-certificates/244849.pem
	I0920 17:58:49.849572  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /usr/share/ca-certificates/2448492.pem
	I0920 17:58:49.850129  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:58:49.878422  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:58:49.902242  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:58:49.926391  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 17:58:49.950027  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 17:58:49.972641  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 17:58:49.997022  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:58:50.021804  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:58:50.045879  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:58:50.069136  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 17:58:50.092444  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 17:58:50.116716  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 17:58:50.136353  256536 ssh_runner.go:195] Run: openssl version
	I0920 17:58:50.145863  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:58:50.157513  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:58:50.162700  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:58:50.162778  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:58:50.168948  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:58:50.180125  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 17:58:50.192366  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 17:58:50.197085  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 17:58:50.197163  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 17:58:50.203424  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 17:58:50.216229  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 17:58:50.228077  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 17:58:50.233241  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 17:58:50.233312  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 17:58:50.240012  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
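
The commands above install each CA bundle under /usr/share/ca-certificates and symlink it by its OpenSSL subject hash. To double-check that a generated certificate carries the expected SANs - the apiserver cert, for instance, was issued earlier for 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.246 and the 192.168.39.254 VIP - a short standard-library inspection sketch can be used; the path is the profile location from the log and assumes access to the same minikube home.

// certinfo.go - print subject, expiry, DNS and IP SANs of a PEM certificate.
package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
)

func main() {
    path := "/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt" // from the log
    data, err := os.ReadFile(path)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    block, _ := pem.Decode(data)
    if block == nil {
        fmt.Fprintln(os.Stderr, "no PEM block found")
        os.Exit(1)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("subject:  ", cert.Subject)
    fmt.Println("notAfter: ", cert.NotAfter)
    fmt.Println("DNS SANs: ", cert.DNSNames)
    fmt.Println("IP SANs:  ", cert.IPAddresses)
}
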
	I0920 17:58:50.251599  256536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:58:50.256160  256536 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:58:50.256224  256536 kubeadm.go:392] StartCluster: {Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:58:50.256322  256536 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 17:58:50.256375  256536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 17:58:50.298938  256536 cri.go:89] found id: ""
	I0920 17:58:50.299007  256536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 17:58:50.309387  256536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 17:58:50.319684  256536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 17:58:50.330318  256536 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 17:58:50.330339  256536 kubeadm.go:157] found existing configuration files:
	
	I0920 17:58:50.330388  256536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 17:58:50.339356  256536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 17:58:50.339424  256536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 17:58:50.348952  256536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 17:58:50.357964  256536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 17:58:50.358028  256536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 17:58:50.367163  256536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 17:58:50.376370  256536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 17:58:50.376452  256536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 17:58:50.385926  256536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 17:58:50.395143  256536 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 17:58:50.395230  256536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 17:58:50.405341  256536 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 17:58:50.519254  256536 kubeadm.go:310] W0920 17:58:50.504659     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:58:50.520220  256536 kubeadm.go:310] W0920 17:58:50.505817     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:58:50.645093  256536 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 17:59:01.982945  256536 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 17:59:01.983025  256536 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 17:59:01.983103  256536 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 17:59:01.983216  256536 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 17:59:01.983302  256536 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 17:59:01.983352  256536 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 17:59:01.985269  256536 out.go:235]   - Generating certificates and keys ...
	I0920 17:59:01.985356  256536 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 17:59:01.985409  256536 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 17:59:01.985500  256536 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 17:59:01.985582  256536 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 17:59:01.985647  256536 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 17:59:01.985692  256536 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 17:59:01.985749  256536 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 17:59:01.985852  256536 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-347193 localhost] and IPs [192.168.39.246 127.0.0.1 ::1]
	I0920 17:59:01.985922  256536 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 17:59:01.986042  256536 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-347193 localhost] and IPs [192.168.39.246 127.0.0.1 ::1]
	I0920 17:59:01.986131  256536 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 17:59:01.986209  256536 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 17:59:01.986270  256536 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 17:59:01.986323  256536 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 17:59:01.986367  256536 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 17:59:01.986420  256536 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 17:59:01.986465  256536 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 17:59:01.986546  256536 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 17:59:01.986640  256536 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 17:59:01.986748  256536 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 17:59:01.986815  256536 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 17:59:01.988640  256536 out.go:235]   - Booting up control plane ...
	I0920 17:59:01.988728  256536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 17:59:01.988790  256536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 17:59:01.988846  256536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 17:59:01.988962  256536 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 17:59:01.989082  256536 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 17:59:01.989168  256536 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 17:59:01.989296  256536 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 17:59:01.989387  256536 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 17:59:01.989445  256536 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001806633s
	I0920 17:59:01.989505  256536 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 17:59:01.989576  256536 kubeadm.go:310] [api-check] The API server is healthy after 5.617049153s
	I0920 17:59:01.989696  256536 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 17:59:01.989803  256536 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 17:59:01.989858  256536 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 17:59:01.990057  256536 kubeadm.go:310] [mark-control-plane] Marking the node ha-347193 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 17:59:01.990116  256536 kubeadm.go:310] [bootstrap-token] Using token: copxt9.xhya9dvcru2ncb8u
	I0920 17:59:01.991737  256536 out.go:235]   - Configuring RBAC rules ...
	I0920 17:59:01.991825  256536 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 17:59:01.991930  256536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 17:59:01.992134  256536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 17:59:01.992315  256536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 17:59:01.992430  256536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 17:59:01.992514  256536 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 17:59:01.992624  256536 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 17:59:01.992678  256536 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 17:59:01.992734  256536 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 17:59:01.992741  256536 kubeadm.go:310] 
	I0920 17:59:01.992825  256536 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 17:59:01.992833  256536 kubeadm.go:310] 
	I0920 17:59:01.992910  256536 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 17:59:01.992916  256536 kubeadm.go:310] 
	I0920 17:59:01.992954  256536 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 17:59:01.993039  256536 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 17:59:01.993097  256536 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 17:59:01.993103  256536 kubeadm.go:310] 
	I0920 17:59:01.993147  256536 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 17:59:01.993155  256536 kubeadm.go:310] 
	I0920 17:59:01.993208  256536 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 17:59:01.993218  256536 kubeadm.go:310] 
	I0920 17:59:01.993275  256536 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 17:59:01.993343  256536 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 17:59:01.993400  256536 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 17:59:01.993414  256536 kubeadm.go:310] 
	I0920 17:59:01.993487  256536 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 17:59:01.993558  256536 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 17:59:01.993564  256536 kubeadm.go:310] 
	I0920 17:59:01.993661  256536 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token copxt9.xhya9dvcru2ncb8u \
	I0920 17:59:01.993755  256536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 \
	I0920 17:59:01.993786  256536 kubeadm.go:310] 	--control-plane 
	I0920 17:59:01.993795  256536 kubeadm.go:310] 
	I0920 17:59:01.993885  256536 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 17:59:01.993896  256536 kubeadm.go:310] 
	I0920 17:59:01.994008  256536 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token copxt9.xhya9dvcru2ncb8u \
	I0920 17:59:01.994126  256536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 
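
Both join commands pin the cluster CA with the --discovery-token-ca-cert-hash value shown. In kubeadm's CA-pinning scheme that value is the hex-encoded SHA-256 of the CA certificate's Subject Public Key Info, so it can be recomputed from the ca.crt copied to the node earlier in this log; a sketch:

// cahash.go - recompute the kubeadm discovery-token-ca-cert-hash from ca.crt.
// The hash is the SHA-256 of the CA certificate's Subject Public Key Info.
package main

import (
    "crypto/sha256"
    "crypto/x509"
    "encoding/hex"
    "encoding/pem"
    "fmt"
    "os"
)

func main() {
    data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // node path used above
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    block, _ := pem.Decode(data)
    if block == nil {
        fmt.Fprintln(os.Stderr, "no PEM block in ca.crt")
        os.Exit(1)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}

Run against the node's ca.crt, this should reproduce the hash embedded in the join commands above.
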
	I0920 17:59:01.994145  256536 cni.go:84] Creating CNI manager for ""
	I0920 17:59:01.994153  256536 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 17:59:01.995934  256536 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 17:59:01.997387  256536 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 17:59:02.002770  256536 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 17:59:02.002796  256536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 17:59:02.023932  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
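
Before rendering and applying the kindnet manifest, minikube stats /opt/cni/bin/portmap (17:59:01.997 above) to confirm the bundled CNI plugins are present. A trivial sketch that lists whatever plugin binaries are installed in that directory (the path is the conventional CNI location used in the log):

// cniplugins.go - list the CNI plugin binaries under /opt/cni/bin.
package main

import (
    "fmt"
    "os"
)

func main() {
    entries, err := os.ReadDir("/opt/cni/bin")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    for _, e := range entries {
        fmt.Println(e.Name())
    }
}
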
	I0920 17:59:02.397367  256536 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 17:59:02.397459  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:59:02.397493  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-347193 minikube.k8s.io/updated_at=2024_09_20T17_59_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=ha-347193 minikube.k8s.io/primary=true
	I0920 17:59:02.423770  256536 ops.go:34] apiserver oom_adj: -16
	I0920 17:59:02.508023  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:59:03.008485  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:59:03.508182  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:59:04.008435  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:59:04.508089  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:59:05.009064  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:59:05.101282  256536 kubeadm.go:1113] duration metric: took 2.703897001s to wait for elevateKubeSystemPrivileges
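
The repeated "kubectl get sa default" runs above are a plain poll: the command is retried roughly every half second until the default service account exists, at which point the minikube-rbac binding created at 17:59:02.397 can take effect. A generic version of that wait loop, sketched by shelling out to the same kubectl path seen in the log (the interval and timeout here are illustrative, not minikube's actual values):

// waitsa.go - poll until the "default" service account exists, similar to the
// elevateKubeSystemPrivileges wait in the log. kubectl path/kubeconfig assumed.
package main

import (
    "fmt"
    "os"
    "os/exec"
    "time"
)

func main() {
    kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
    deadline := time.Now().Add(2 * time.Minute)
    for time.Now().Before(deadline) {
        cmd := exec.Command(kubectl, "get", "sa", "default",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        if err := cmd.Run(); err == nil {
            fmt.Println("default service account is ready")
            return
        }
        time.Sleep(500 * time.Millisecond) // roughly the cadence seen above
    }
    fmt.Fprintln(os.Stderr, "timed out waiting for the default service account")
    os.Exit(1)
}
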
	I0920 17:59:05.101325  256536 kubeadm.go:394] duration metric: took 14.845108845s to StartCluster
	I0920 17:59:05.101350  256536 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:59:05.101447  256536 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:59:05.102205  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:59:05.102460  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 17:59:05.102470  256536 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 17:59:05.102452  256536 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:59:05.102580  256536 addons.go:69] Setting default-storageclass=true in profile "ha-347193"
	I0920 17:59:05.102587  256536 start.go:241] waiting for startup goroutines ...
	I0920 17:59:05.102561  256536 addons.go:69] Setting storage-provisioner=true in profile "ha-347193"
	I0920 17:59:05.102601  256536 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-347193"
	I0920 17:59:05.102614  256536 addons.go:234] Setting addon storage-provisioner=true in "ha-347193"
	I0920 17:59:05.102655  256536 host.go:66] Checking if "ha-347193" exists ...
	I0920 17:59:05.102708  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:59:05.103073  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:05.103096  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:05.103105  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:05.103128  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:05.119041  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I0920 17:59:05.119120  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40515
	I0920 17:59:05.119527  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:05.119535  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:05.120054  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:05.120064  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:05.120077  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:05.120081  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:05.120411  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:05.120459  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:05.120594  256536 main.go:141] libmachine: (ha-347193) Calling .GetState
	I0920 17:59:05.120915  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:05.120945  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:05.123163  256536 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:59:05.123416  256536 kapi.go:59] client config for ha-347193: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.crt", KeyFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key", CAFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 17:59:05.123863  256536 cert_rotation.go:140] Starting client certificate rotation controller
	I0920 17:59:05.124188  256536 addons.go:234] Setting addon default-storageclass=true in "ha-347193"
	I0920 17:59:05.124232  256536 host.go:66] Checking if "ha-347193" exists ...
	I0920 17:59:05.124598  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:05.124630  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:05.136314  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38555
	I0920 17:59:05.136762  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:05.137268  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:05.137297  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:05.137618  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:05.137833  256536 main.go:141] libmachine: (ha-347193) Calling .GetState
	I0920 17:59:05.139657  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:59:05.139802  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38355
	I0920 17:59:05.140195  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:05.140708  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:05.140736  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:05.141146  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:05.141698  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:05.141724  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:05.141892  256536 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 17:59:05.143631  256536 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:59:05.143657  256536 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 17:59:05.143686  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:59:05.146965  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:05.147514  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:59:05.147538  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:05.147705  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:59:05.147909  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:59:05.148047  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:59:05.148204  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:59:05.158393  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39527
	I0920 17:59:05.158953  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:05.159494  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:05.159527  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:05.159919  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:05.160100  256536 main.go:141] libmachine: (ha-347193) Calling .GetState
	I0920 17:59:05.161631  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:59:05.161924  256536 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 17:59:05.161945  256536 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 17:59:05.161964  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:59:05.164799  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:05.165159  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:59:05.165192  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:05.165404  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:59:05.165619  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:59:05.165790  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:59:05.165962  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:59:05.229095  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 17:59:05.299511  256536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:59:05.333515  256536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 17:59:05.572818  256536 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
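
The pipeline at 17:59:05.229 rewrites the Corefile in the coredns ConfigMap, inserting a hosts block that maps host.minikube.internal to 192.168.39.1 ahead of the forward plugin. A hedged sketch of the same text edit, reading a Corefile on stdin and writing the modified version to stdout (the inserted block mirrors the sed expression above):

// corednshosts.go - insert a "hosts" block ahead of CoreDNS's forward plugin,
// mirroring the sed expression in the log. Reads a Corefile on stdin.
package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

func main() {
    hostsBlock := "        hosts {\n" +
        "           192.168.39.1 host.minikube.internal\n" +
        "           fallthrough\n" +
        "        }"
    scanner := bufio.NewScanner(os.Stdin)
    for scanner.Scan() {
        line := scanner.Text()
        // Emit the hosts block immediately before the forward plugin line.
        if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
            fmt.Println(hostsBlock)
        }
        fmt.Println(line)
    }
    if err := scanner.Err(); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}

Piping the cluster's current Corefile through it yields the hosts stanza that the replaced ConfigMap ends up containing.
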
	I0920 17:59:05.872829  256536 main.go:141] libmachine: Making call to close driver server
	I0920 17:59:05.872867  256536 main.go:141] libmachine: (ha-347193) Calling .Close
	I0920 17:59:05.872944  256536 main.go:141] libmachine: Making call to close driver server
	I0920 17:59:05.872967  256536 main.go:141] libmachine: (ha-347193) Calling .Close
	I0920 17:59:05.873195  256536 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:59:05.873214  256536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:59:05.873224  256536 main.go:141] libmachine: Making call to close driver server
	I0920 17:59:05.873232  256536 main.go:141] libmachine: (ha-347193) Calling .Close
	I0920 17:59:05.873274  256536 main.go:141] libmachine: (ha-347193) DBG | Closing plugin on server side
	I0920 17:59:05.873310  256536 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:59:05.873317  256536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:59:05.873325  256536 main.go:141] libmachine: Making call to close driver server
	I0920 17:59:05.873332  256536 main.go:141] libmachine: (ha-347193) Calling .Close
	I0920 17:59:05.873517  256536 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:59:05.873541  256536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:59:05.873602  256536 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 17:59:05.873621  256536 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 17:59:05.873624  256536 main.go:141] libmachine: (ha-347193) DBG | Closing plugin on server side
	I0920 17:59:05.873718  256536 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:59:05.873742  256536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:59:05.873751  256536 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0920 17:59:05.873766  256536 round_trippers.go:469] Request Headers:
	I0920 17:59:05.873776  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:59:05.873785  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:59:05.888629  256536 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0920 17:59:05.889182  256536 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0920 17:59:05.889201  256536 round_trippers.go:469] Request Headers:
	I0920 17:59:05.889211  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:59:05.889215  256536 round_trippers.go:473]     Content-Type: application/json
	I0920 17:59:05.889223  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:59:05.892179  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 17:59:05.892357  256536 main.go:141] libmachine: Making call to close driver server
	I0920 17:59:05.892373  256536 main.go:141] libmachine: (ha-347193) Calling .Close
	I0920 17:59:05.892691  256536 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:59:05.892709  256536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:59:05.894279  256536 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0920 17:59:05.895496  256536 addons.go:510] duration metric: took 793.020671ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0920 17:59:05.895531  256536 start.go:246] waiting for cluster config update ...
	I0920 17:59:05.895542  256536 start.go:255] writing updated cluster config ...
	I0920 17:59:05.897257  256536 out.go:201] 
	I0920 17:59:05.898660  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:59:05.898730  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 17:59:05.900283  256536 out.go:177] * Starting "ha-347193-m02" control-plane node in "ha-347193" cluster
	I0920 17:59:05.901396  256536 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:59:05.901420  256536 cache.go:56] Caching tarball of preloaded images
	I0920 17:59:05.901510  256536 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 17:59:05.901521  256536 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 17:59:05.901597  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 17:59:05.901759  256536 start.go:360] acquireMachinesLock for ha-347193-m02: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:59:05.901802  256536 start.go:364] duration metric: took 24.671µs to acquireMachinesLock for "ha-347193-m02"
	I0920 17:59:05.901820  256536 start.go:93] Provisioning new machine with config: &{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:59:05.901885  256536 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0920 17:59:05.903637  256536 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 17:59:05.903736  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:05.903765  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:05.919718  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37733
	I0920 17:59:05.920256  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:05.920760  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:05.920783  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:05.921213  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:05.921446  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetMachineName
	I0920 17:59:05.921623  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:05.921862  256536 start.go:159] libmachine.API.Create for "ha-347193" (driver="kvm2")
	I0920 17:59:05.921894  256536 client.go:168] LocalClient.Create starting
	I0920 17:59:05.921946  256536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem
	I0920 17:59:05.921992  256536 main.go:141] libmachine: Decoding PEM data...
	I0920 17:59:05.922017  256536 main.go:141] libmachine: Parsing certificate...
	I0920 17:59:05.922095  256536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem
	I0920 17:59:05.922126  256536 main.go:141] libmachine: Decoding PEM data...
	I0920 17:59:05.922142  256536 main.go:141] libmachine: Parsing certificate...
	I0920 17:59:05.922169  256536 main.go:141] libmachine: Running pre-create checks...
	I0920 17:59:05.922181  256536 main.go:141] libmachine: (ha-347193-m02) Calling .PreCreateCheck
	I0920 17:59:05.922398  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetConfigRaw
	I0920 17:59:05.922898  256536 main.go:141] libmachine: Creating machine...
	I0920 17:59:05.922915  256536 main.go:141] libmachine: (ha-347193-m02) Calling .Create
	I0920 17:59:05.923043  256536 main.go:141] libmachine: (ha-347193-m02) Creating KVM machine...
	I0920 17:59:05.924563  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found existing default KVM network
	I0920 17:59:05.924648  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found existing private KVM network mk-ha-347193
	I0920 17:59:05.924819  256536 main.go:141] libmachine: (ha-347193-m02) Setting up store path in /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02 ...
	I0920 17:59:05.924844  256536 main.go:141] libmachine: (ha-347193-m02) Building disk image from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 17:59:05.924904  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:05.924790  256915 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:59:05.925011  256536 main.go:141] libmachine: (ha-347193-m02) Downloading /home/jenkins/minikube-integration/19679-237658/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 17:59:06.216167  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:06.216027  256915 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa...
	I0920 17:59:06.325597  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:06.325412  256915 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/ha-347193-m02.rawdisk...
	I0920 17:59:06.325640  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Writing magic tar header
	I0920 17:59:06.325658  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Writing SSH key tar header
	I0920 17:59:06.325672  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:06.325581  256915 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02 ...
	I0920 17:59:06.325689  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02
	I0920 17:59:06.325740  256536 main.go:141] libmachine: (ha-347193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02 (perms=drwx------)
	I0920 17:59:06.325762  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines
	I0920 17:59:06.325774  256536 main.go:141] libmachine: (ha-347193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines (perms=drwxr-xr-x)
	I0920 17:59:06.325786  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:59:06.325801  256536 main.go:141] libmachine: (ha-347193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube (perms=drwxr-xr-x)
	I0920 17:59:06.325822  256536 main.go:141] libmachine: (ha-347193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658 (perms=drwxrwxr-x)
	I0920 17:59:06.325834  256536 main.go:141] libmachine: (ha-347193-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 17:59:06.325857  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658
	I0920 17:59:06.325886  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 17:59:06.325897  256536 main.go:141] libmachine: (ha-347193-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 17:59:06.325927  256536 main.go:141] libmachine: (ha-347193-m02) Creating domain...
	I0920 17:59:06.325957  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Checking permissions on dir: /home/jenkins
	I0920 17:59:06.325971  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Checking permissions on dir: /home
	I0920 17:59:06.325982  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Skipping /home - not owner
	I0920 17:59:06.327271  256536 main.go:141] libmachine: (ha-347193-m02) define libvirt domain using xml: 
	I0920 17:59:06.327300  256536 main.go:141] libmachine: (ha-347193-m02) <domain type='kvm'>
	I0920 17:59:06.327310  256536 main.go:141] libmachine: (ha-347193-m02)   <name>ha-347193-m02</name>
	I0920 17:59:06.327317  256536 main.go:141] libmachine: (ha-347193-m02)   <memory unit='MiB'>2200</memory>
	I0920 17:59:06.327324  256536 main.go:141] libmachine: (ha-347193-m02)   <vcpu>2</vcpu>
	I0920 17:59:06.327330  256536 main.go:141] libmachine: (ha-347193-m02)   <features>
	I0920 17:59:06.327339  256536 main.go:141] libmachine: (ha-347193-m02)     <acpi/>
	I0920 17:59:06.327347  256536 main.go:141] libmachine: (ha-347193-m02)     <apic/>
	I0920 17:59:06.327356  256536 main.go:141] libmachine: (ha-347193-m02)     <pae/>
	I0920 17:59:06.327366  256536 main.go:141] libmachine: (ha-347193-m02)     
	I0920 17:59:06.327375  256536 main.go:141] libmachine: (ha-347193-m02)   </features>
	I0920 17:59:06.327386  256536 main.go:141] libmachine: (ha-347193-m02)   <cpu mode='host-passthrough'>
	I0920 17:59:06.327396  256536 main.go:141] libmachine: (ha-347193-m02)   
	I0920 17:59:06.327411  256536 main.go:141] libmachine: (ha-347193-m02)   </cpu>
	I0920 17:59:06.327426  256536 main.go:141] libmachine: (ha-347193-m02)   <os>
	I0920 17:59:06.327438  256536 main.go:141] libmachine: (ha-347193-m02)     <type>hvm</type>
	I0920 17:59:06.327452  256536 main.go:141] libmachine: (ha-347193-m02)     <boot dev='cdrom'/>
	I0920 17:59:06.327463  256536 main.go:141] libmachine: (ha-347193-m02)     <boot dev='hd'/>
	I0920 17:59:06.327471  256536 main.go:141] libmachine: (ha-347193-m02)     <bootmenu enable='no'/>
	I0920 17:59:06.327482  256536 main.go:141] libmachine: (ha-347193-m02)   </os>
	I0920 17:59:06.327490  256536 main.go:141] libmachine: (ha-347193-m02)   <devices>
	I0920 17:59:06.327501  256536 main.go:141] libmachine: (ha-347193-m02)     <disk type='file' device='cdrom'>
	I0920 17:59:06.327515  256536 main.go:141] libmachine: (ha-347193-m02)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/boot2docker.iso'/>
	I0920 17:59:06.327544  256536 main.go:141] libmachine: (ha-347193-m02)       <target dev='hdc' bus='scsi'/>
	I0920 17:59:06.327569  256536 main.go:141] libmachine: (ha-347193-m02)       <readonly/>
	I0920 17:59:06.327578  256536 main.go:141] libmachine: (ha-347193-m02)     </disk>
	I0920 17:59:06.327587  256536 main.go:141] libmachine: (ha-347193-m02)     <disk type='file' device='disk'>
	I0920 17:59:06.327597  256536 main.go:141] libmachine: (ha-347193-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 17:59:06.327607  256536 main.go:141] libmachine: (ha-347193-m02)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/ha-347193-m02.rawdisk'/>
	I0920 17:59:06.327619  256536 main.go:141] libmachine: (ha-347193-m02)       <target dev='hda' bus='virtio'/>
	I0920 17:59:06.327627  256536 main.go:141] libmachine: (ha-347193-m02)     </disk>
	I0920 17:59:06.327635  256536 main.go:141] libmachine: (ha-347193-m02)     <interface type='network'>
	I0920 17:59:06.327649  256536 main.go:141] libmachine: (ha-347193-m02)       <source network='mk-ha-347193'/>
	I0920 17:59:06.327659  256536 main.go:141] libmachine: (ha-347193-m02)       <model type='virtio'/>
	I0920 17:59:06.327669  256536 main.go:141] libmachine: (ha-347193-m02)     </interface>
	I0920 17:59:06.327680  256536 main.go:141] libmachine: (ha-347193-m02)     <interface type='network'>
	I0920 17:59:06.327690  256536 main.go:141] libmachine: (ha-347193-m02)       <source network='default'/>
	I0920 17:59:06.327701  256536 main.go:141] libmachine: (ha-347193-m02)       <model type='virtio'/>
	I0920 17:59:06.327711  256536 main.go:141] libmachine: (ha-347193-m02)     </interface>
	I0920 17:59:06.327722  256536 main.go:141] libmachine: (ha-347193-m02)     <serial type='pty'>
	I0920 17:59:06.327737  256536 main.go:141] libmachine: (ha-347193-m02)       <target port='0'/>
	I0920 17:59:06.327748  256536 main.go:141] libmachine: (ha-347193-m02)     </serial>
	I0920 17:59:06.327761  256536 main.go:141] libmachine: (ha-347193-m02)     <console type='pty'>
	I0920 17:59:06.327773  256536 main.go:141] libmachine: (ha-347193-m02)       <target type='serial' port='0'/>
	I0920 17:59:06.327786  256536 main.go:141] libmachine: (ha-347193-m02)     </console>
	I0920 17:59:06.327797  256536 main.go:141] libmachine: (ha-347193-m02)     <rng model='virtio'>
	I0920 17:59:06.327808  256536 main.go:141] libmachine: (ha-347193-m02)       <backend model='random'>/dev/random</backend>
	I0920 17:59:06.327819  256536 main.go:141] libmachine: (ha-347193-m02)     </rng>
	I0920 17:59:06.327825  256536 main.go:141] libmachine: (ha-347193-m02)     
	I0920 17:59:06.327833  256536 main.go:141] libmachine: (ha-347193-m02)     
	I0920 17:59:06.327840  256536 main.go:141] libmachine: (ha-347193-m02)   </devices>
	I0920 17:59:06.327847  256536 main.go:141] libmachine: (ha-347193-m02) </domain>
	I0920 17:59:06.327853  256536 main.go:141] libmachine: (ha-347193-m02) 
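
For reference, the XML logged above is a complete libvirt domain definition for the second control-plane node: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a SCSI cdrom, the raw disk as a virtio disk, and one virtio NIC on each of the "default" and "mk-ha-347193" networks. The kvm2 driver applies it through the libvirt API, but the same definition could be applied by hand with virsh against the URI from the profile config (qemu:///system); this is only an illustrative sketch, and the file path below is made up for the example:

    # save the <domain>...</domain> block above to a file, then:
    virsh -c qemu:///system define /tmp/ha-347193-m02.xml    # register the domain
    virsh -c qemu:///system start ha-347193-m02               # boot the VM
    virsh -c qemu:///system dumpxml ha-347193-m02             # inspect what libvirt stored
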
	I0920 17:59:06.335776  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:99:8b:51 in network default
	I0920 17:59:06.336465  256536 main.go:141] libmachine: (ha-347193-m02) Ensuring networks are active...
	I0920 17:59:06.336495  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:06.337274  256536 main.go:141] libmachine: (ha-347193-m02) Ensuring network default is active
	I0920 17:59:06.337717  256536 main.go:141] libmachine: (ha-347193-m02) Ensuring network mk-ha-347193 is active
	I0920 17:59:06.338271  256536 main.go:141] libmachine: (ha-347193-m02) Getting domain xml...
	I0920 17:59:06.339065  256536 main.go:141] libmachine: (ha-347193-m02) Creating domain...
	I0920 17:59:07.590103  256536 main.go:141] libmachine: (ha-347193-m02) Waiting to get IP...
	I0920 17:59:07.591029  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:07.591430  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:07.591465  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:07.591414  256915 retry.go:31] will retry after 226.007564ms: waiting for machine to come up
	I0920 17:59:07.819128  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:07.819593  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:07.819618  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:07.819539  256915 retry.go:31] will retry after 341.961936ms: waiting for machine to come up
	I0920 17:59:08.163271  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:08.163762  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:08.163842  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:08.163725  256915 retry.go:31] will retry after 303.677068ms: waiting for machine to come up
	I0920 17:59:08.469231  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:08.469723  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:08.469751  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:08.469670  256915 retry.go:31] will retry after 590.358913ms: waiting for machine to come up
	I0920 17:59:09.061444  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:09.061930  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:09.061952  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:09.061882  256915 retry.go:31] will retry after 511.282935ms: waiting for machine to come up
	I0920 17:59:09.574742  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:09.575187  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:09.575214  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:09.575124  256915 retry.go:31] will retry after 856.972258ms: waiting for machine to come up
	I0920 17:59:10.434260  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:10.434831  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:10.434853  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:10.434774  256915 retry.go:31] will retry after 836.344709ms: waiting for machine to come up
	I0920 17:59:11.273284  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:11.274041  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:11.274078  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:11.273981  256915 retry.go:31] will retry after 1.355754749s: waiting for machine to come up
	I0920 17:59:12.631596  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:12.631994  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:12.632021  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:12.631955  256915 retry.go:31] will retry after 1.6398171s: waiting for machine to come up
	I0920 17:59:14.273660  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:14.274139  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:14.274166  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:14.274082  256915 retry.go:31] will retry after 2.299234308s: waiting for machine to come up
	I0920 17:59:16.575079  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:16.575516  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:16.575545  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:16.575474  256915 retry.go:31] will retry after 2.142102972s: waiting for machine to come up
	I0920 17:59:18.720889  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:18.721374  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:18.721401  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:18.721344  256915 retry.go:31] will retry after 2.537816732s: waiting for machine to come up
	I0920 17:59:21.261045  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:21.261472  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:21.261500  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:21.261409  256915 retry.go:31] will retry after 3.610609319s: waiting for machine to come up
	I0920 17:59:24.876357  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:24.876860  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find current IP address of domain ha-347193-m02 in network mk-ha-347193
	I0920 17:59:24.876882  256536 main.go:141] libmachine: (ha-347193-m02) DBG | I0920 17:59:24.876825  256915 retry.go:31] will retry after 4.700561987s: waiting for machine to come up
	I0920 17:59:29.581568  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.582102  256536 main.go:141] libmachine: (ha-347193-m02) Found IP for machine: 192.168.39.241
	I0920 17:59:29.582125  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has current primary IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.582131  256536 main.go:141] libmachine: (ha-347193-m02) Reserving static IP address...
	I0920 17:59:29.582608  256536 main.go:141] libmachine: (ha-347193-m02) DBG | unable to find host DHCP lease matching {name: "ha-347193-m02", mac: "52:54:00:2a:a9:ec", ip: "192.168.39.241"} in network mk-ha-347193
	I0920 17:59:29.662003  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Getting to WaitForSSH function...
	I0920 17:59:29.662037  256536 main.go:141] libmachine: (ha-347193-m02) Reserved static IP address: 192.168.39.241
	I0920 17:59:29.662058  256536 main.go:141] libmachine: (ha-347193-m02) Waiting for SSH to be available...
	I0920 17:59:29.666033  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.666545  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:29.666582  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.666603  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Using SSH client type: external
	I0920 17:59:29.666618  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa (-rw-------)
	I0920 17:59:29.666652  256536 main.go:141] libmachine: (ha-347193-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.241 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 17:59:29.666668  256536 main.go:141] libmachine: (ha-347193-m02) DBG | About to run SSH command:
	I0920 17:59:29.666675  256536 main.go:141] libmachine: (ha-347193-m02) DBG | exit 0
	I0920 17:59:29.794185  256536 main.go:141] libmachine: (ha-347193-m02) DBG | SSH cmd err, output: <nil>: 
	I0920 17:59:29.794474  256536 main.go:141] libmachine: (ha-347193-m02) KVM machine creation complete!
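
The retry loop above (backoffs from 226ms up to about 4.7s) is the driver polling libvirt for a DHCP lease on the private network until the guest reports 192.168.39.241, after which reachability is confirmed by running "exit 0" over SSH. Both checks can be reproduced by hand; the network name, MAC, key path and SSH options below are taken from the log, but this is a sketch rather than what libmachine executes internally:

    # show leases handed out on the cluster network (expect the 52:54:00:2a:a9:ec entry)
    virsh -c qemu:///system net-dhcp-leases mk-ha-347193
    # probe SSH the same way the driver does
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa \
        docker@192.168.39.241 'exit 0'
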
	I0920 17:59:29.794737  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetConfigRaw
	I0920 17:59:29.795327  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:29.795609  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:29.795784  256536 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 17:59:29.795797  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetState
	I0920 17:59:29.797225  256536 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 17:59:29.797243  256536 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 17:59:29.797249  256536 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 17:59:29.797255  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:29.799913  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.800263  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:29.800285  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.800414  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:29.800599  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:29.800763  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:29.800897  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:29.801057  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:59:29.801269  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0920 17:59:29.801282  256536 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 17:59:29.909222  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:59:29.909246  256536 main.go:141] libmachine: Detecting the provisioner...
	I0920 17:59:29.909255  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:29.912190  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.912743  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:29.912765  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:29.913023  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:29.913242  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:29.913432  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:29.913591  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:29.913750  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:59:29.913984  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0920 17:59:29.913999  256536 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 17:59:30.022466  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 17:59:30.022546  256536 main.go:141] libmachine: found compatible host: buildroot
	I0920 17:59:30.022558  256536 main.go:141] libmachine: Provisioning with buildroot...
	I0920 17:59:30.022572  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetMachineName
	I0920 17:59:30.022864  256536 buildroot.go:166] provisioning hostname "ha-347193-m02"
	I0920 17:59:30.022888  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetMachineName
	I0920 17:59:30.023065  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:30.025530  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.025878  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.025926  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.026023  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:30.026228  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.026416  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.026576  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:30.026730  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:59:30.026894  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0920 17:59:30.026904  256536 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-347193-m02 && echo "ha-347193-m02" | sudo tee /etc/hostname
	I0920 17:59:30.147982  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-347193-m02
	
	I0920 17:59:30.148028  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:30.151033  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.151386  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.151409  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.151586  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:30.151765  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.151945  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.152170  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:30.152401  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:59:30.152590  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0920 17:59:30.152607  256536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-347193-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-347193-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-347193-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:59:30.271015  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:59:30.271057  256536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 17:59:30.271078  256536 buildroot.go:174] setting up certificates
	I0920 17:59:30.271087  256536 provision.go:84] configureAuth start
	I0920 17:59:30.271097  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetMachineName
	I0920 17:59:30.271410  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetIP
	I0920 17:59:30.273849  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.274342  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.274365  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.274563  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:30.277006  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.277328  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.277355  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.277454  256536 provision.go:143] copyHostCerts
	I0920 17:59:30.277493  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 17:59:30.277528  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 17:59:30.277538  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 17:59:30.277621  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 17:59:30.277724  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 17:59:30.277753  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 17:59:30.277763  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 17:59:30.277802  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 17:59:30.277864  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 17:59:30.277886  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 17:59:30.277894  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 17:59:30.277955  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 17:59:30.278028  256536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.ha-347193-m02 san=[127.0.0.1 192.168.39.241 ha-347193-m02 localhost minikube]
	I0920 17:59:30.390911  256536 provision.go:177] copyRemoteCerts
	I0920 17:59:30.390984  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:59:30.391016  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:30.394282  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.394669  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.394705  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.394848  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:30.395053  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.395190  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:30.395311  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa Username:docker}
	I0920 17:59:30.480101  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 17:59:30.480183  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 17:59:30.504430  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 17:59:30.504533  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 17:59:30.532508  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 17:59:30.532609  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 17:59:30.555072  256536 provision.go:87] duration metric: took 283.968068ms to configureAuth
	I0920 17:59:30.555106  256536 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:59:30.555298  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:59:30.555382  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:30.558201  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.558658  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.558688  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.558891  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:30.559083  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.559260  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.559393  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:30.559554  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:59:30.559783  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0920 17:59:30.559809  256536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 17:59:30.779495  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 17:59:30.779542  256536 main.go:141] libmachine: Checking connection to Docker...
	I0920 17:59:30.779553  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetURL
	I0920 17:59:30.780879  256536 main.go:141] libmachine: (ha-347193-m02) DBG | Using libvirt version 6000000
	I0920 17:59:30.782959  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.783290  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.783321  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.783453  256536 main.go:141] libmachine: Docker is up and running!
	I0920 17:59:30.783468  256536 main.go:141] libmachine: Reticulating splines...
	I0920 17:59:30.783477  256536 client.go:171] duration metric: took 24.8615738s to LocalClient.Create
	I0920 17:59:30.783506  256536 start.go:167] duration metric: took 24.861646798s to libmachine.API.Create "ha-347193"
	I0920 17:59:30.783518  256536 start.go:293] postStartSetup for "ha-347193-m02" (driver="kvm2")
	I0920 17:59:30.783531  256536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:59:30.783550  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:30.783789  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:59:30.783813  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:30.786027  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.786349  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.786370  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.786628  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:30.786815  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.786993  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:30.787118  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa Username:docker}
	I0920 17:59:30.872345  256536 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:59:30.876519  256536 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:59:30.876550  256536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 17:59:30.876627  256536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 17:59:30.876702  256536 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 17:59:30.876712  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /etc/ssl/certs/2448492.pem
	I0920 17:59:30.876794  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:59:30.886441  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 17:59:30.909455  256536 start.go:296] duration metric: took 125.914203ms for postStartSetup
	I0920 17:59:30.909530  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetConfigRaw
	I0920 17:59:30.910141  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetIP
	I0920 17:59:30.912668  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.912976  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.913008  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.913233  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 17:59:30.913434  256536 start.go:128] duration metric: took 25.011535523s to createHost
	I0920 17:59:30.913460  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:30.915700  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.915987  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:30.916010  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:30.916226  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:30.916424  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.916603  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:30.916761  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:30.916950  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 17:59:30.917155  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0920 17:59:30.917166  256536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:59:31.026461  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855170.997739673
	
	I0920 17:59:31.026489  256536 fix.go:216] guest clock: 1726855170.997739673
	I0920 17:59:31.026496  256536 fix.go:229] Guest: 2024-09-20 17:59:30.997739673 +0000 UTC Remote: 2024-09-20 17:59:30.913448056 +0000 UTC m=+71.229940404 (delta=84.291617ms)
	I0920 17:59:31.026512  256536 fix.go:200] guest clock delta is within tolerance: 84.291617ms
	I0920 17:59:31.026517  256536 start.go:83] releasing machines lock for "ha-347193-m02", held for 25.124707242s
	I0920 17:59:31.026538  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:31.026839  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetIP
	I0920 17:59:31.029757  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:31.030179  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:31.030206  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:31.032445  256536 out.go:177] * Found network options:
	I0920 17:59:31.034196  256536 out.go:177]   - NO_PROXY=192.168.39.246
	W0920 17:59:31.035224  256536 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 17:59:31.035267  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:31.035792  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:31.035991  256536 main.go:141] libmachine: (ha-347193-m02) Calling .DriverName
	I0920 17:59:31.036100  256536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:59:31.036143  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	W0920 17:59:31.036175  256536 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 17:59:31.036267  256536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 17:59:31.036294  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHHostname
	I0920 17:59:31.039153  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:31.039466  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:31.039563  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:31.039596  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:31.039727  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:31.039878  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:31.039897  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:31.039909  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:31.040048  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:31.040104  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHPort
	I0920 17:59:31.040219  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa Username:docker}
	I0920 17:59:31.040318  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHKeyPath
	I0920 17:59:31.040480  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetSSHUsername
	I0920 17:59:31.040634  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m02/id_rsa Username:docker}
	I0920 17:59:31.274255  256536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:59:31.280374  256536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:59:31.280441  256536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:59:31.296955  256536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 17:59:31.296987  256536 start.go:495] detecting cgroup driver to use...
	I0920 17:59:31.297127  256536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:59:31.313543  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:59:31.328017  256536 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:59:31.328096  256536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:59:31.341962  256536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:59:31.355931  256536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:59:31.467597  256536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:59:31.622972  256536 docker.go:233] disabling docker service ...
	I0920 17:59:31.623069  256536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:59:31.637011  256536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:59:31.649605  256536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:59:31.771555  256536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:59:31.885423  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
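
Because the profile requests the crio container runtime, the provisioner first makes sure containerd, cri-docker and docker cannot claim the CRI socket: it stops them, disables their sockets, and masks the services. Condensed from the ssh_runner commands above (purely a restatement of what was logged, not an extra step the test performs):

    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket && sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket && sudo systemctl mask docker.service
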
	I0920 17:59:31.898889  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:59:31.916477  256536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 17:59:31.916540  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:59:31.926444  256536 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 17:59:31.926525  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:59:31.937116  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:59:31.947355  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:59:31.957415  256536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:59:31.968385  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:59:31.979172  256536 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:59:31.996319  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 17:59:32.006541  256536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:59:32.015815  256536 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 17:59:32.015883  256536 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 17:59:32.028240  256536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:59:32.037972  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:59:32.152278  256536 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 17:59:32.246123  256536 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 17:59:32.246218  256536 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 17:59:32.251023  256536 start.go:563] Will wait 60s for crictl version
	I0920 17:59:32.251119  256536 ssh_runner.go:195] Run: which crictl
	I0920 17:59:32.254625  256536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:59:32.289498  256536 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 17:59:32.289579  256536 ssh_runner.go:195] Run: crio --version
	I0920 17:59:32.316659  256536 ssh_runner.go:195] Run: crio --version
	I0920 17:59:32.344869  256536 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 17:59:32.346085  256536 out.go:177]   - env NO_PROXY=192.168.39.246
	I0920 17:59:32.347420  256536 main.go:141] libmachine: (ha-347193-m02) Calling .GetIP
	I0920 17:59:32.350776  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:32.351141  256536 main.go:141] libmachine: (ha-347193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:a9:ec", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:59:20 +0000 UTC Type:0 Mac:52:54:00:2a:a9:ec Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-347193-m02 Clientid:01:52:54:00:2a:a9:ec}
	I0920 17:59:32.351172  256536 main.go:141] libmachine: (ha-347193-m02) DBG | domain ha-347193-m02 has defined IP address 192.168.39.241 and MAC address 52:54:00:2a:a9:ec in network mk-ha-347193
	I0920 17:59:32.351449  256536 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:59:32.355587  256536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:59:32.367465  256536 mustload.go:65] Loading cluster: ha-347193
	I0920 17:59:32.367713  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:59:32.368030  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:32.368075  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:32.383118  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I0920 17:59:32.383676  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:32.384195  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:32.384214  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:32.384600  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:32.384841  256536 main.go:141] libmachine: (ha-347193) Calling .GetState
	I0920 17:59:32.386464  256536 host.go:66] Checking if "ha-347193" exists ...
	I0920 17:59:32.386753  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:32.386789  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:32.402199  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35537
	I0920 17:59:32.402698  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:32.403237  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:32.403260  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:32.403569  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:32.403791  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:59:32.403932  256536 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193 for IP: 192.168.39.241
	I0920 17:59:32.403945  256536 certs.go:194] generating shared ca certs ...
	I0920 17:59:32.403966  256536 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:59:32.404125  256536 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 17:59:32.404172  256536 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 17:59:32.404185  256536 certs.go:256] generating profile certs ...
	I0920 17:59:32.404277  256536 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key
	I0920 17:59:32.404313  256536 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.32ffe274
	I0920 17:59:32.404333  256536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.32ffe274 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.241 192.168.39.254]
	I0920 17:59:32.510440  256536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.32ffe274 ...
	I0920 17:59:32.510475  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.32ffe274: {Name:mkc30548db6e83d8832ed460ef3ecdc3101e5f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:59:32.510691  256536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.32ffe274 ...
	I0920 17:59:32.510711  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.32ffe274: {Name:mk355121b8c4a956d860782a1b0c1370e7e6b83b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:59:32.510815  256536 certs.go:381] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.32ffe274 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt
	I0920 17:59:32.510982  256536 certs.go:385] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.32ffe274 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key
	I0920 17:59:32.511155  256536 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key
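The apiserver profile certificate generated above is a serving cert signed by the minikube CA, carrying the service IP, loopback, both control-plane node IPs and the HA VIP as IP SANs. Purely as an illustrative sketch (not minikube's actual certs.go code; the ca.crt/ca.key/apiserver.crt file names are placeholders), issuing such a cert with Go's crypto/x509 looks roughly like this:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// must keeps the sketch short by turning errors into a fatal exit.
func must[T any](v T, err error) T {
	if err != nil {
		log.Fatal(err)
	}
	return v
}

// mustPEM decodes the first PEM block and returns its DER bytes.
func mustPEM(data []byte) []byte {
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	return block.Bytes
}

func main() {
	// Load an existing CA (paths are hypothetical placeholders).
	caCert := must(x509.ParseCertificate(mustPEM(must(os.ReadFile("ca.crt")))))
	caKey := must(x509.ParsePKCS1PrivateKey(mustPEM(must(os.ReadFile("ca.key")))))

	// Serving-cert template with the same kinds of IP SANs listed in the log above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.246"), net.ParseIP("192.168.39.241"), net.ParseIP("192.168.39.254"),
		},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	key := must(rsa.GenerateKey(rand.Reader, 2048))
	der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey))

	out := must(os.Create("apiserver.crt"))
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}
```

The real flow also writes the matching private key and, as the "skipping valid signed profile cert regeneration" lines show, reuses existing certs when they are still valid.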
	I0920 17:59:32.511179  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 17:59:32.511194  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 17:59:32.511205  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 17:59:32.511220  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 17:59:32.511234  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 17:59:32.511253  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 17:59:32.511269  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 17:59:32.511287  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 17:59:32.511357  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 17:59:32.511396  256536 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 17:59:32.511405  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:59:32.511438  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 17:59:32.511471  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:59:32.511501  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 17:59:32.511554  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 17:59:32.511594  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /usr/share/ca-certificates/2448492.pem
	I0920 17:59:32.511618  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:59:32.511638  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem -> /usr/share/ca-certificates/244849.pem
	I0920 17:59:32.511683  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:59:32.515008  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:32.515405  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:59:32.515433  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:32.515642  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:59:32.515847  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:59:32.515999  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:59:32.516117  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:59:32.590305  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 17:59:32.595442  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 17:59:32.607284  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 17:59:32.611399  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0920 17:59:32.622339  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 17:59:32.626371  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 17:59:32.636850  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 17:59:32.640553  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0920 17:59:32.651329  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 17:59:32.655163  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 17:59:32.666449  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 17:59:32.670985  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0920 17:59:32.681916  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:59:32.706099  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:59:32.733293  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:59:32.756993  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 17:59:32.781045  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0920 17:59:32.804602  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 17:59:32.829390  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:59:32.854727  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:59:32.878575  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 17:59:32.902198  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:59:32.926004  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 17:59:32.950687  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 17:59:32.966783  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0920 17:59:32.982858  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 17:59:32.998897  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0920 17:59:33.015096  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 17:59:33.030999  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0920 17:59:33.046670  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
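Every "Run:" and "scp" line above goes through minikube's ssh_runner, which executes commands and pushes files to the guest over SSH. A minimal, hypothetical sketch of the same pattern with golang.org/x/crypto/ssh, reusing the host, user, key path and one command from this log (the scp transfers use the same runner; only command execution is shown here):

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key, user and address are taken from the log lines above.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
	}
	client, err := ssh.Dial("tcp", "192.168.39.246:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Equivalent of one ssh_runner "Run:" line: stat a file on the guest.
	out, err := session.CombinedOutput(`stat -c "%s %y" /var/lib/minikube/certs/sa.pub`)
	if err != nil {
		log.Fatalf("remote command failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}
```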
	I0920 17:59:33.063118  256536 ssh_runner.go:195] Run: openssl version
	I0920 17:59:33.068899  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 17:59:33.079939  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 17:59:33.084424  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 17:59:33.084485  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 17:59:33.090249  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 17:59:33.100697  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:59:33.111242  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:59:33.115679  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:59:33.115779  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:59:33.121728  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:59:33.132827  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 17:59:33.144204  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 17:59:33.148909  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 17:59:33.149013  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 17:59:33.155176  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 17:59:33.167680  256536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:59:33.171844  256536 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:59:33.171909  256536 kubeadm.go:934] updating node {m02 192.168.39.241 8443 v1.31.1 crio true true} ...
	I0920 17:59:33.172010  256536 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-347193-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 17:59:33.172048  256536 kube-vip.go:115] generating kube-vip config ...
	I0920 17:59:33.172096  256536 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 17:59:33.188452  256536 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 17:59:33.188534  256536 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0920 17:59:33.188596  256536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:59:33.200215  256536 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 17:59:33.200283  256536 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 17:59:33.211876  256536 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 17:59:33.211910  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 17:59:33.211977  256536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 17:59:33.211977  256536 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0920 17:59:33.211976  256536 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0920 17:59:33.216444  256536 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 17:59:33.216484  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 17:59:34.138597  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 17:59:34.138688  256536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 17:59:34.143879  256536 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 17:59:34.143926  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 17:59:34.359690  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:59:34.385444  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 17:59:34.385565  256536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 17:59:34.390030  256536 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 17:59:34.390071  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
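The kubectl, kubeadm and kubelet binaries are fetched from dl.k8s.io with a checksum taken from the matching .sha256 file (see the "Not caching binary, using …?checksum=file:…" line above) before being copied into /var/lib/minikube/binaries. A rough, hypothetical sketch of that download-and-verify step for one binary:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url and returns the response body.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	url := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"

	bin, err := fetch(url)
	if err != nil {
		log.Fatal(err)
	}
	shaFile, err := fetch(url + ".sha256")
	if err != nil {
		log.Fatal(err)
	}

	// The .sha256 file carries the hex digest of the binary; compare before installing.
	want := strings.Fields(string(shaFile))[0]
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		log.Fatalf("checksum mismatch: got %x, want %s", got, want)
	}

	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		log.Fatal(err)
	}
	fmt.Println("kubectl downloaded and verified")
}
```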
	I0920 17:59:34.700597  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 17:59:34.710043  256536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 17:59:34.726628  256536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:59:34.743032  256536 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 17:59:34.758894  256536 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 17:59:34.762912  256536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:59:34.775241  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:59:34.903828  256536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:59:34.920877  256536 host.go:66] Checking if "ha-347193" exists ...
	I0920 17:59:34.921370  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:59:34.921427  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:59:34.936803  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45877
	I0920 17:59:34.937329  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:59:34.937858  256536 main.go:141] libmachine: Using API Version  1
	I0920 17:59:34.937878  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:59:34.938232  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:59:34.938485  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 17:59:34.938651  256536 start.go:317] joinCluster: &{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:59:34.938783  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 17:59:34.938806  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 17:59:34.942213  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:34.942681  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 17:59:34.942710  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 17:59:34.942970  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 17:59:34.943133  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 17:59:34.943329  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 17:59:34.943450  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 17:59:35.091635  256536 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:59:35.091698  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u7ake0.3opk6636yb6nqfez --discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-347193-m02 --control-plane --apiserver-advertise-address=192.168.39.241 --apiserver-bind-port=8443"
	I0920 17:59:58.407521  256536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u7ake0.3opk6636yb6nqfez --discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-347193-m02 --control-plane --apiserver-advertise-address=192.168.39.241 --apiserver-bind-port=8443": (23.315793188s)
	I0920 17:59:58.407571  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 17:59:58.935865  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-347193-m02 minikube.k8s.io/updated_at=2024_09_20T17_59_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=ha-347193 minikube.k8s.io/primary=false
	I0920 17:59:59.078065  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-347193-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 17:59:59.202785  256536 start.go:319] duration metric: took 24.264127262s to joinCluster
	I0920 17:59:59.202881  256536 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 17:59:59.203156  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:59:59.204855  256536 out.go:177] * Verifying Kubernetes components...
	I0920 17:59:59.206648  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:59:59.459291  256536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:59:59.534641  256536 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:59:59.534924  256536 kapi.go:59] client config for ha-347193: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.crt", KeyFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key", CAFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 17:59:59.534997  256536 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.246:8443
	I0920 17:59:59.535231  256536 node_ready.go:35] waiting up to 6m0s for node "ha-347193-m02" to be "Ready" ...
	I0920 17:59:59.535334  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 17:59:59.535343  256536 round_trippers.go:469] Request Headers:
	I0920 17:59:59.535354  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 17:59:59.535362  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 17:59:59.550229  256536 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0920 18:00:00.035883  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:00.035909  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:00.035928  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:00.035932  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:00.046596  256536 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0920 18:00:00.535658  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:00.535691  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:00.535702  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:00.535709  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:00.541409  256536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:00:01.035971  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:01.036006  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:01.036018  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:01.036024  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:01.040150  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:01.536089  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:01.536113  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:01.536123  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:01.536128  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:01.540239  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:01.540746  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:02.036207  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:02.036234  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:02.036250  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:02.036253  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:02.040514  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:02.535543  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:02.535572  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:02.535585  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:02.535591  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:02.541651  256536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 18:00:03.035563  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:03.035589  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:03.035598  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:03.035606  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:03.039108  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:03.535979  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:03.536001  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:03.536009  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:03.536019  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:03.539926  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:04.035710  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:04.035734  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:04.035743  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:04.035746  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:04.039659  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:04.040156  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:04.535537  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:04.535559  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:04.535572  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:04.535575  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:04.540040  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:05.036185  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:05.036211  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:05.036222  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:05.036229  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:05.040132  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:05.536445  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:05.536515  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:05.536529  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:05.536535  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:05.539954  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:06.036190  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:06.036217  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:06.036228  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:06.036235  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:06.039984  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:06.040529  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:06.535732  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:06.535756  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:06.535765  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:06.535769  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:06.539264  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:07.036241  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:07.036266  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:07.036274  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:07.036278  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:07.040942  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:07.535952  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:07.535977  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:07.535986  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:07.535989  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:07.539355  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:08.036196  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:08.036223  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:08.036231  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:08.036235  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:08.039851  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:08.535561  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:08.535589  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:08.535603  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:08.535609  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:08.540000  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:08.540484  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:09.035653  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:09.035683  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:09.035692  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:09.035695  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:09.039339  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:09.536386  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:09.536410  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:09.536421  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:09.536427  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:09.539675  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:10.036302  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:10.036335  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:10.036347  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:10.036352  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:10.039818  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:10.535749  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:10.535778  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:10.535787  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:10.535792  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:10.539640  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:11.036020  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:11.036050  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:11.036060  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:11.036066  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:11.039525  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:11.040266  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:11.535666  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:11.535691  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:11.535697  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:11.535700  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:11.538988  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:12.036243  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:12.036277  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:12.036285  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:12.036289  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:12.040685  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:12.535894  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:12.535923  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:12.535931  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:12.535936  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:12.539877  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:13.036023  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:13.036052  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:13.036062  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:13.036068  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:13.039752  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:13.040483  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:13.535855  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:13.535883  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:13.535894  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:13.535899  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:13.539399  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:14.036503  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:14.036530  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:14.036539  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:14.036542  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:14.040297  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:14.536446  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:14.536477  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:14.536489  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:14.536496  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:14.539974  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:15.036448  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:15.036478  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:15.036489  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:15.036495  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:15.040620  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:15.041167  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:15.535516  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:15.535545  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:15.535553  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:15.535559  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:15.539083  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:16.036510  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:16.036537  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:16.036546  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:16.036549  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:16.041085  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:16.535826  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:16.535849  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:16.535861  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:16.535865  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:16.539059  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:17.036117  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:17.036144  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:17.036153  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:17.036160  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:17.040478  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:17.535518  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:17.535543  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:17.535552  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:17.535556  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:17.540491  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:17.541065  256536 node_ready.go:53] node "ha-347193-m02" has status "Ready":"False"
	I0920 18:00:18.035427  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:18.035454  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.035462  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.035467  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.039556  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:18.040741  256536 node_ready.go:49] node "ha-347193-m02" has status "Ready":"True"
	I0920 18:00:18.040773  256536 node_ready.go:38] duration metric: took 18.505523491s for node "ha-347193-m02" to be "Ready" ...
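The block of repeated GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02 requests above is node_ready.go polling until the node's Ready condition becomes True (about 18.5s in this run). A simplified, hypothetical equivalent using client-go, with the kubeconfig path and node name taken from this log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the API server until the named node reports Ready=True
// or the timeout expires, mirroring the repeated GETs in the log above.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // short poll interval, roughly like the log above
	}
	return fmt.Errorf("node %q was not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19679-237658/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForNodeReady(context.Background(), cs, "ha-347193-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node ha-347193-m02 is Ready")
}
```

The same polling pattern is then applied per system-critical pod, which is what the pod_ready.go lines that follow are doing.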
	I0920 18:00:18.040784  256536 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:00:18.040932  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:00:18.040941  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.040957  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.040962  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.046873  256536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:00:18.054373  256536 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6llmd" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.054477  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6llmd
	I0920 18:00:18.054485  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.054492  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.054496  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.058597  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:18.060016  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:18.060034  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.060042  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.060047  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.062721  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:00:18.063302  256536 pod_ready.go:93] pod "coredns-7c65d6cfc9-6llmd" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:18.063326  256536 pod_ready.go:82] duration metric: took 8.921017ms for pod "coredns-7c65d6cfc9-6llmd" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.063339  256536 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bkmhn" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.063419  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bkmhn
	I0920 18:00:18.063429  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.063437  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.063442  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.065673  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:00:18.066345  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:18.066361  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.066368  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.066372  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.068535  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:00:18.068957  256536 pod_ready.go:93] pod "coredns-7c65d6cfc9-bkmhn" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:18.068975  256536 pod_ready.go:82] duration metric: took 5.629047ms for pod "coredns-7c65d6cfc9-bkmhn" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.068985  256536 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.069042  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-347193
	I0920 18:00:18.069050  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.069058  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.069064  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.071215  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:00:18.071725  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:18.071741  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.071748  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.071752  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.076248  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:18.076783  256536 pod_ready.go:93] pod "etcd-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:18.076809  256536 pod_ready.go:82] duration metric: took 7.814986ms for pod "etcd-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.076822  256536 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.076903  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-347193-m02
	I0920 18:00:18.076913  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.076933  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.076942  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.079425  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:00:18.080041  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:18.080062  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.080070  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.080073  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.082658  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:00:18.083080  256536 pod_ready.go:93] pod "etcd-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:18.083098  256536 pod_ready.go:82] duration metric: took 6.269137ms for pod "etcd-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.083120  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.235451  256536 request.go:632] Waited for 152.265053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193
	I0920 18:00:18.235515  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193
	I0920 18:00:18.235520  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.235529  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.235538  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.239325  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:18.436436  256536 request.go:632] Waited for 196.38005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:18.436497  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:18.436502  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.436510  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.436513  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.439995  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:18.440920  256536 pod_ready.go:93] pod "kube-apiserver-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:18.440944  256536 pod_ready.go:82] duration metric: took 357.817605ms for pod "kube-apiserver-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.440954  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.636140  256536 request.go:632] Waited for 195.087959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193-m02
	I0920 18:00:18.636243  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193-m02
	I0920 18:00:18.636255  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.636268  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.636280  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.640087  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:18.836246  256536 request.go:632] Waited for 195.361959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:18.836311  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:18.836316  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:18.836323  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:18.836328  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:18.840653  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:18.841777  256536 pod_ready.go:93] pod "kube-apiserver-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:18.841799  256536 pod_ready.go:82] duration metric: took 400.83724ms for pod "kube-apiserver-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:18.841809  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:19.036009  256536 request.go:632] Waited for 194.129324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193
	I0920 18:00:19.036093  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193
	I0920 18:00:19.036098  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:19.036106  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:19.036111  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:19.039737  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:19.236270  256536 request.go:632] Waited for 195.455754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:19.236346  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:19.236354  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:19.236365  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:19.236373  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:19.241800  256536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:00:19.242348  256536 pod_ready.go:93] pod "kube-controller-manager-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:19.242373  256536 pod_ready.go:82] duration metric: took 400.554651ms for pod "kube-controller-manager-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:19.242385  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:19.436357  256536 request.go:632] Waited for 193.884621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193-m02
	I0920 18:00:19.436449  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193-m02
	I0920 18:00:19.436463  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:19.436474  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:19.436485  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:19.446510  256536 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0920 18:00:19.635563  256536 request.go:632] Waited for 188.301909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:19.635648  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:19.635653  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:19.635661  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:19.635665  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:19.639157  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:19.639875  256536 pod_ready.go:93] pod "kube-controller-manager-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:19.639909  256536 pod_ready.go:82] duration metric: took 397.513343ms for pod "kube-controller-manager-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:19.639925  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ffdvq" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:19.836481  256536 request.go:632] Waited for 196.456867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ffdvq
	I0920 18:00:19.836549  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ffdvq
	I0920 18:00:19.836555  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:19.836563  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:19.836568  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:19.840480  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:20.036151  256536 request.go:632] Waited for 194.863834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:20.036217  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:20.036230  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:20.036238  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:20.036242  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:20.040324  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:20.040897  256536 pod_ready.go:93] pod "kube-proxy-ffdvq" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:20.040926  256536 pod_ready.go:82] duration metric: took 400.990573ms for pod "kube-proxy-ffdvq" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:20.040940  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rdqkg" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:20.235885  256536 request.go:632] Waited for 194.862598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdqkg
	I0920 18:00:20.235966  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdqkg
	I0920 18:00:20.235973  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:20.235983  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:20.235989  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:20.239847  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:20.436319  256536 request.go:632] Waited for 195.461517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:20.436386  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:20.436391  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:20.436399  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:20.436403  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:20.440218  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:20.440901  256536 pod_ready.go:93] pod "kube-proxy-rdqkg" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:20.440935  256536 pod_ready.go:82] duration metric: took 399.983159ms for pod "kube-proxy-rdqkg" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:20.440946  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:20.636078  256536 request.go:632] Waited for 195.028076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193
	I0920 18:00:20.636162  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193
	I0920 18:00:20.636181  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:20.636193  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:20.636206  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:20.639813  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:20.835867  256536 request.go:632] Waited for 195.433474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:20.835962  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:00:20.835968  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:20.835976  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:20.835982  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:20.839792  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:20.840650  256536 pod_ready.go:93] pod "kube-scheduler-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:20.840681  256536 pod_ready.go:82] duration metric: took 399.725704ms for pod "kube-scheduler-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:20.840695  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:21.036247  256536 request.go:632] Waited for 195.4677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193-m02
	I0920 18:00:21.036330  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193-m02
	I0920 18:00:21.036335  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:21.036344  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:21.036348  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:21.040845  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:21.235815  256536 request.go:632] Waited for 194.360469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:21.235904  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:00:21.235911  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:21.235921  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:21.235928  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:21.239741  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:21.240157  256536 pod_ready.go:93] pod "kube-scheduler-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:00:21.240181  256536 pod_ready.go:82] duration metric: took 399.476235ms for pod "kube-scheduler-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:00:21.240195  256536 pod_ready.go:39] duration metric: took 3.199359276s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
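The block above is minikube polling each control-plane pod's Ready condition through raw apiserver GETs (pod_ready.go). A minimal sketch of the same check written against client-go — not minikube's actual implementation, and the kubeconfig path is a placeholder:

    // Sketch: poll a kube-system pod until its Ready condition is True,
    // the same check the log above performs with raw GETs.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    return nil // pod is Ready
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond): // poll interval, chosen for illustration
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        fmt.Println(waitPodReady(ctx, cs, "kube-system", "coredns-7c65d6cfc9-6llmd"))
    }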
	I0920 18:00:21.240216  256536 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:00:21.240276  256536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:00:21.258549  256536 api_server.go:72] duration metric: took 22.055620378s to wait for apiserver process to appear ...
	I0920 18:00:21.258580  256536 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:00:21.258610  256536 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I0920 18:00:21.263626  256536 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I0920 18:00:21.263706  256536 round_trippers.go:463] GET https://192.168.39.246:8443/version
	I0920 18:00:21.263711  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:21.263719  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:21.263724  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:21.265005  256536 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0920 18:00:21.265129  256536 api_server.go:141] control plane version: v1.31.1
	I0920 18:00:21.265148  256536 api_server.go:131] duration metric: took 6.561205ms to wait for apiserver health ...
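The healthz and version probes above amount to HTTPS GETs against the apiserver endpoint. A minimal sketch of that probe; TLS verification is skipped here purely for illustration, whereas minikube itself authenticates with the cluster's certificates:

    // Sketch: hit /healthz on the apiserver and treat HTTP 200 with body "ok" as healthy.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // illustration only
        }
        resp, err := client.Get("https://192.168.39.246:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
    }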
	I0920 18:00:21.265155  256536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:00:21.435532  256536 request.go:632] Waited for 170.291625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:00:21.435621  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:00:21.435628  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:21.435636  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:21.435639  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:21.442020  256536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 18:00:21.446425  256536 system_pods.go:59] 17 kube-system pods found
	I0920 18:00:21.446458  256536 system_pods.go:61] "coredns-7c65d6cfc9-6llmd" [8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92] Running
	I0920 18:00:21.446463  256536 system_pods.go:61] "coredns-7c65d6cfc9-bkmhn" [f7862a6e-54cc-450c-b283-d20fb99f51ce] Running
	I0920 18:00:21.446467  256536 system_pods.go:61] "etcd-ha-347193" [e13fc198-b02b-4f0a-bf76-be0f519d9d57] Running
	I0920 18:00:21.446470  256536 system_pods.go:61] "etcd-ha-347193-m02" [4ea69953-b35a-4ae9-8153-cea3be5e2c1c] Running
	I0920 18:00:21.446473  256536 system_pods.go:61] "kindnet-cqbxl" [3d49a6b1-5be5-4d96-98e3-bd05035a2d1b] Running
	I0920 18:00:21.446478  256536 system_pods.go:61] "kindnet-z24zp" [9271d251-2d95-4b23-85f3-7da6567b2fc3] Running
	I0920 18:00:21.446482  256536 system_pods.go:61] "kube-apiserver-ha-347193" [993ccf05-a39a-42b4-b82d-936531325dc4] Running
	I0920 18:00:21.446485  256536 system_pods.go:61] "kube-apiserver-ha-347193-m02" [43cd77b8-8925-4a04-a8cf-1b9a0cbbc502] Running
	I0920 18:00:21.446489  256536 system_pods.go:61] "kube-controller-manager-ha-347193" [6de3a14b-6587-45d4-aaee-1256b9c327cc] Running
	I0920 18:00:21.446492  256536 system_pods.go:61] "kube-controller-manager-ha-347193-m02" [cdf3f4d7-0675-4c59-8ad5-8901104d71c3] Running
	I0920 18:00:21.446495  256536 system_pods.go:61] "kube-proxy-ffdvq" [97120f62-0af2-405a-b8ff-639c72a39a2d] Running
	I0920 18:00:21.446500  256536 system_pods.go:61] "kube-proxy-rdqkg" [d9ae4e37-b29b-400a-af2d-544da4024069] Running
	I0920 18:00:21.446502  256536 system_pods.go:61] "kube-scheduler-ha-347193" [910baa0e-404e-4ac7-9262-848672eaf9cf] Running
	I0920 18:00:21.446505  256536 system_pods.go:61] "kube-scheduler-ha-347193-m02" [623b9c3b-b998-4516-a53e-17e9d8970594] Running
	I0920 18:00:21.446508  256536 system_pods.go:61] "kube-vip-ha-347193" [20d6faa4-600f-4bd0-8acb-1f95c047da58] Running
	I0920 18:00:21.446511  256536 system_pods.go:61] "kube-vip-ha-347193-m02" [1455826c-7b3d-40f7-bb15-a9861ee95e19] Running
	I0920 18:00:21.446516  256536 system_pods.go:61] "storage-provisioner" [8924f7ce-85a0-4587-9c05-8a74c7113e9e] Running
	I0920 18:00:21.446521  256536 system_pods.go:74] duration metric: took 181.36053ms to wait for pod list to return data ...
	I0920 18:00:21.446528  256536 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:00:21.636065  256536 request.go:632] Waited for 189.405126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:00:21.636135  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:00:21.636141  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:21.636148  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:21.636153  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:21.640839  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:00:21.641122  256536 default_sa.go:45] found service account: "default"
	I0920 18:00:21.641142  256536 default_sa.go:55] duration metric: took 194.607217ms for default service account to be created ...
	I0920 18:00:21.641151  256536 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:00:21.835580  256536 request.go:632] Waited for 194.337083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:00:21.835675  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:00:21.835682  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:21.835689  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:21.835693  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:21.841225  256536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:00:21.846004  256536 system_pods.go:86] 17 kube-system pods found
	I0920 18:00:21.846039  256536 system_pods.go:89] "coredns-7c65d6cfc9-6llmd" [8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92] Running
	I0920 18:00:21.846046  256536 system_pods.go:89] "coredns-7c65d6cfc9-bkmhn" [f7862a6e-54cc-450c-b283-d20fb99f51ce] Running
	I0920 18:00:21.846051  256536 system_pods.go:89] "etcd-ha-347193" [e13fc198-b02b-4f0a-bf76-be0f519d9d57] Running
	I0920 18:00:21.846055  256536 system_pods.go:89] "etcd-ha-347193-m02" [4ea69953-b35a-4ae9-8153-cea3be5e2c1c] Running
	I0920 18:00:21.846059  256536 system_pods.go:89] "kindnet-cqbxl" [3d49a6b1-5be5-4d96-98e3-bd05035a2d1b] Running
	I0920 18:00:21.846062  256536 system_pods.go:89] "kindnet-z24zp" [9271d251-2d95-4b23-85f3-7da6567b2fc3] Running
	I0920 18:00:21.846066  256536 system_pods.go:89] "kube-apiserver-ha-347193" [993ccf05-a39a-42b4-b82d-936531325dc4] Running
	I0920 18:00:21.846070  256536 system_pods.go:89] "kube-apiserver-ha-347193-m02" [43cd77b8-8925-4a04-a8cf-1b9a0cbbc502] Running
	I0920 18:00:21.846074  256536 system_pods.go:89] "kube-controller-manager-ha-347193" [6de3a14b-6587-45d4-aaee-1256b9c327cc] Running
	I0920 18:00:21.846078  256536 system_pods.go:89] "kube-controller-manager-ha-347193-m02" [cdf3f4d7-0675-4c59-8ad5-8901104d71c3] Running
	I0920 18:00:21.846082  256536 system_pods.go:89] "kube-proxy-ffdvq" [97120f62-0af2-405a-b8ff-639c72a39a2d] Running
	I0920 18:00:21.846085  256536 system_pods.go:89] "kube-proxy-rdqkg" [d9ae4e37-b29b-400a-af2d-544da4024069] Running
	I0920 18:00:21.846089  256536 system_pods.go:89] "kube-scheduler-ha-347193" [910baa0e-404e-4ac7-9262-848672eaf9cf] Running
	I0920 18:00:21.846093  256536 system_pods.go:89] "kube-scheduler-ha-347193-m02" [623b9c3b-b998-4516-a53e-17e9d8970594] Running
	I0920 18:00:21.846097  256536 system_pods.go:89] "kube-vip-ha-347193" [20d6faa4-600f-4bd0-8acb-1f95c047da58] Running
	I0920 18:00:21.846108  256536 system_pods.go:89] "kube-vip-ha-347193-m02" [1455826c-7b3d-40f7-bb15-a9861ee95e19] Running
	I0920 18:00:21.846111  256536 system_pods.go:89] "storage-provisioner" [8924f7ce-85a0-4587-9c05-8a74c7113e9e] Running
	I0920 18:00:21.846118  256536 system_pods.go:126] duration metric: took 204.961033ms to wait for k8s-apps to be running ...
	I0920 18:00:21.846127  256536 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:00:21.846175  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:00:21.862644  256536 system_svc.go:56] duration metric: took 16.499746ms WaitForService to wait for kubelet
	I0920 18:00:21.862683  256536 kubeadm.go:582] duration metric: took 22.659763297s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:00:21.862708  256536 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:00:22.036245  256536 request.go:632] Waited for 173.422886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I0920 18:00:22.036330  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I0920 18:00:22.036338  256536 round_trippers.go:469] Request Headers:
	I0920 18:00:22.036349  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:00:22.036357  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:00:22.040138  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:00:22.040911  256536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:00:22.040940  256536 node_conditions.go:123] node cpu capacity is 2
	I0920 18:00:22.040957  256536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:00:22.040962  256536 node_conditions.go:123] node cpu capacity is 2
	I0920 18:00:22.040967  256536 node_conditions.go:105] duration metric: took 178.253105ms to run NodePressure ...
	I0920 18:00:22.040983  256536 start.go:241] waiting for startup goroutines ...
	I0920 18:00:22.041015  256536 start.go:255] writing updated cluster config ...
	I0920 18:00:22.043512  256536 out.go:201] 
	I0920 18:00:22.045235  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:00:22.045367  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 18:00:22.047395  256536 out.go:177] * Starting "ha-347193-m03" control-plane node in "ha-347193" cluster
	I0920 18:00:22.048977  256536 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:00:22.049012  256536 cache.go:56] Caching tarball of preloaded images
	I0920 18:00:22.049136  256536 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:00:22.049148  256536 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:00:22.049248  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 18:00:22.049435  256536 start.go:360] acquireMachinesLock for ha-347193-m03: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:00:22.049481  256536 start.go:364] duration metric: took 26µs to acquireMachinesLock for "ha-347193-m03"
	I0920 18:00:22.049501  256536 start.go:93] Provisioning new machine with config: &{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:00:22.049631  256536 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0920 18:00:22.051727  256536 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 18:00:22.051867  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:00:22.051912  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:00:22.067720  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40165
	I0920 18:00:22.068325  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:00:22.068884  256536 main.go:141] libmachine: Using API Version  1
	I0920 18:00:22.068907  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:00:22.069270  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:00:22.069481  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetMachineName
	I0920 18:00:22.069638  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:22.069845  256536 start.go:159] libmachine.API.Create for "ha-347193" (driver="kvm2")
	I0920 18:00:22.069873  256536 client.go:168] LocalClient.Create starting
	I0920 18:00:22.069933  256536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem
	I0920 18:00:22.069978  256536 main.go:141] libmachine: Decoding PEM data...
	I0920 18:00:22.069993  256536 main.go:141] libmachine: Parsing certificate...
	I0920 18:00:22.070053  256536 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem
	I0920 18:00:22.070073  256536 main.go:141] libmachine: Decoding PEM data...
	I0920 18:00:22.070084  256536 main.go:141] libmachine: Parsing certificate...
	I0920 18:00:22.070099  256536 main.go:141] libmachine: Running pre-create checks...
	I0920 18:00:22.070107  256536 main.go:141] libmachine: (ha-347193-m03) Calling .PreCreateCheck
	I0920 18:00:22.070282  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetConfigRaw
	I0920 18:00:22.070730  256536 main.go:141] libmachine: Creating machine...
	I0920 18:00:22.070742  256536 main.go:141] libmachine: (ha-347193-m03) Calling .Create
	I0920 18:00:22.070908  256536 main.go:141] libmachine: (ha-347193-m03) Creating KVM machine...
	I0920 18:00:22.072409  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found existing default KVM network
	I0920 18:00:22.072583  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found existing private KVM network mk-ha-347193
	I0920 18:00:22.072739  256536 main.go:141] libmachine: (ha-347193-m03) Setting up store path in /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03 ...
	I0920 18:00:22.072765  256536 main.go:141] libmachine: (ha-347193-m03) Building disk image from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 18:00:22.072834  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:22.072724  257331 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:00:22.072916  256536 main.go:141] libmachine: (ha-347193-m03) Downloading /home/jenkins/minikube-integration/19679-237658/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 18:00:22.338205  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:22.338046  257331 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa...
	I0920 18:00:22.401743  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:22.401600  257331 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/ha-347193-m03.rawdisk...
	I0920 18:00:22.401769  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Writing magic tar header
	I0920 18:00:22.401826  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Writing SSH key tar header
	I0920 18:00:22.401856  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:22.401719  257331 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03 ...
	I0920 18:00:22.401875  256536 main.go:141] libmachine: (ha-347193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03 (perms=drwx------)
	I0920 18:00:22.401895  256536 main.go:141] libmachine: (ha-347193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:00:22.401963  256536 main.go:141] libmachine: (ha-347193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube (perms=drwxr-xr-x)
	I0920 18:00:22.401981  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03
	I0920 18:00:22.401996  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines
	I0920 18:00:22.402006  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:00:22.402019  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658
	I0920 18:00:22.402031  256536 main.go:141] libmachine: (ha-347193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658 (perms=drwxrwxr-x)
	I0920 18:00:22.402043  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:00:22.402054  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:00:22.402064  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Checking permissions on dir: /home
	I0920 18:00:22.402077  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Skipping /home - not owner
	I0920 18:00:22.402112  256536 main.go:141] libmachine: (ha-347193-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:00:22.402132  256536 main.go:141] libmachine: (ha-347193-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:00:22.402145  256536 main.go:141] libmachine: (ha-347193-m03) Creating domain...
	I0920 18:00:22.403163  256536 main.go:141] libmachine: (ha-347193-m03) define libvirt domain using xml: 
	I0920 18:00:22.403182  256536 main.go:141] libmachine: (ha-347193-m03) <domain type='kvm'>
	I0920 18:00:22.403192  256536 main.go:141] libmachine: (ha-347193-m03)   <name>ha-347193-m03</name>
	I0920 18:00:22.403198  256536 main.go:141] libmachine: (ha-347193-m03)   <memory unit='MiB'>2200</memory>
	I0920 18:00:22.403205  256536 main.go:141] libmachine: (ha-347193-m03)   <vcpu>2</vcpu>
	I0920 18:00:22.403215  256536 main.go:141] libmachine: (ha-347193-m03)   <features>
	I0920 18:00:22.403225  256536 main.go:141] libmachine: (ha-347193-m03)     <acpi/>
	I0920 18:00:22.403233  256536 main.go:141] libmachine: (ha-347193-m03)     <apic/>
	I0920 18:00:22.403245  256536 main.go:141] libmachine: (ha-347193-m03)     <pae/>
	I0920 18:00:22.403253  256536 main.go:141] libmachine: (ha-347193-m03)     
	I0920 18:00:22.403263  256536 main.go:141] libmachine: (ha-347193-m03)   </features>
	I0920 18:00:22.403273  256536 main.go:141] libmachine: (ha-347193-m03)   <cpu mode='host-passthrough'>
	I0920 18:00:22.403286  256536 main.go:141] libmachine: (ha-347193-m03)   
	I0920 18:00:22.403296  256536 main.go:141] libmachine: (ha-347193-m03)   </cpu>
	I0920 18:00:22.403305  256536 main.go:141] libmachine: (ha-347193-m03)   <os>
	I0920 18:00:22.403315  256536 main.go:141] libmachine: (ha-347193-m03)     <type>hvm</type>
	I0920 18:00:22.403326  256536 main.go:141] libmachine: (ha-347193-m03)     <boot dev='cdrom'/>
	I0920 18:00:22.403336  256536 main.go:141] libmachine: (ha-347193-m03)     <boot dev='hd'/>
	I0920 18:00:22.403346  256536 main.go:141] libmachine: (ha-347193-m03)     <bootmenu enable='no'/>
	I0920 18:00:22.403355  256536 main.go:141] libmachine: (ha-347193-m03)   </os>
	I0920 18:00:22.403364  256536 main.go:141] libmachine: (ha-347193-m03)   <devices>
	I0920 18:00:22.403375  256536 main.go:141] libmachine: (ha-347193-m03)     <disk type='file' device='cdrom'>
	I0920 18:00:22.403406  256536 main.go:141] libmachine: (ha-347193-m03)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/boot2docker.iso'/>
	I0920 18:00:22.403432  256536 main.go:141] libmachine: (ha-347193-m03)       <target dev='hdc' bus='scsi'/>
	I0920 18:00:22.403442  256536 main.go:141] libmachine: (ha-347193-m03)       <readonly/>
	I0920 18:00:22.403452  256536 main.go:141] libmachine: (ha-347193-m03)     </disk>
	I0920 18:00:22.403465  256536 main.go:141] libmachine: (ha-347193-m03)     <disk type='file' device='disk'>
	I0920 18:00:22.403477  256536 main.go:141] libmachine: (ha-347193-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:00:22.403493  256536 main.go:141] libmachine: (ha-347193-m03)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/ha-347193-m03.rawdisk'/>
	I0920 18:00:22.403506  256536 main.go:141] libmachine: (ha-347193-m03)       <target dev='hda' bus='virtio'/>
	I0920 18:00:22.403515  256536 main.go:141] libmachine: (ha-347193-m03)     </disk>
	I0920 18:00:22.403522  256536 main.go:141] libmachine: (ha-347193-m03)     <interface type='network'>
	I0920 18:00:22.403530  256536 main.go:141] libmachine: (ha-347193-m03)       <source network='mk-ha-347193'/>
	I0920 18:00:22.403537  256536 main.go:141] libmachine: (ha-347193-m03)       <model type='virtio'/>
	I0920 18:00:22.403545  256536 main.go:141] libmachine: (ha-347193-m03)     </interface>
	I0920 18:00:22.403554  256536 main.go:141] libmachine: (ha-347193-m03)     <interface type='network'>
	I0920 18:00:22.403563  256536 main.go:141] libmachine: (ha-347193-m03)       <source network='default'/>
	I0920 18:00:22.403572  256536 main.go:141] libmachine: (ha-347193-m03)       <model type='virtio'/>
	I0920 18:00:22.403580  256536 main.go:141] libmachine: (ha-347193-m03)     </interface>
	I0920 18:00:22.403598  256536 main.go:141] libmachine: (ha-347193-m03)     <serial type='pty'>
	I0920 18:00:22.403608  256536 main.go:141] libmachine: (ha-347193-m03)       <target port='0'/>
	I0920 18:00:22.403614  256536 main.go:141] libmachine: (ha-347193-m03)     </serial>
	I0920 18:00:22.403626  256536 main.go:141] libmachine: (ha-347193-m03)     <console type='pty'>
	I0920 18:00:22.403638  256536 main.go:141] libmachine: (ha-347193-m03)       <target type='serial' port='0'/>
	I0920 18:00:22.403648  256536 main.go:141] libmachine: (ha-347193-m03)     </console>
	I0920 18:00:22.403655  256536 main.go:141] libmachine: (ha-347193-m03)     <rng model='virtio'>
	I0920 18:00:22.403665  256536 main.go:141] libmachine: (ha-347193-m03)       <backend model='random'>/dev/random</backend>
	I0920 18:00:22.403669  256536 main.go:141] libmachine: (ha-347193-m03)     </rng>
	I0920 18:00:22.403674  256536 main.go:141] libmachine: (ha-347193-m03)     
	I0920 18:00:22.403680  256536 main.go:141] libmachine: (ha-347193-m03)     
	I0920 18:00:22.403685  256536 main.go:141] libmachine: (ha-347193-m03)   </devices>
	I0920 18:00:22.403691  256536 main.go:141] libmachine: (ha-347193-m03) </domain>
	I0920 18:00:22.403701  256536 main.go:141] libmachine: (ha-347193-m03) 
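The XML printed above is the libvirt domain definition the kvm2 driver hands to libvirt before booting the new node. A minimal sketch of defining and starting such a domain, assuming the libvirt.org/go/libvirt binding (requires the libvirt development headers and cgo); this is illustrative only and not the driver's actual code path:

    // Sketch: persistently define a domain from an XML file and boot it.
    package main

    import (
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        xml, err := os.ReadFile("ha-347193-m03.xml") // hypothetical file holding the XML above
        if err != nil {
            panic(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(xml)) // define the persistent domain
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // start (boot) the domain
            panic(err)
        }
    }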
	I0920 18:00:22.411929  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:7f:8c:82 in network default
	I0920 18:00:22.412669  256536 main.go:141] libmachine: (ha-347193-m03) Ensuring networks are active...
	I0920 18:00:22.412689  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:22.413649  256536 main.go:141] libmachine: (ha-347193-m03) Ensuring network default is active
	I0920 18:00:22.414029  256536 main.go:141] libmachine: (ha-347193-m03) Ensuring network mk-ha-347193 is active
	I0920 18:00:22.414605  256536 main.go:141] libmachine: (ha-347193-m03) Getting domain xml...
	I0920 18:00:22.415371  256536 main.go:141] libmachine: (ha-347193-m03) Creating domain...
	I0920 18:00:23.690471  256536 main.go:141] libmachine: (ha-347193-m03) Waiting to get IP...
	I0920 18:00:23.691341  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:23.691801  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:23.691826  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:23.691771  257331 retry.go:31] will retry after 305.28803ms: waiting for machine to come up
	I0920 18:00:23.998411  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:23.999018  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:23.999037  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:23.998982  257331 retry.go:31] will retry after 325.282486ms: waiting for machine to come up
	I0920 18:00:24.325459  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:24.326038  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:24.326064  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:24.325997  257331 retry.go:31] will retry after 443.699467ms: waiting for machine to come up
	I0920 18:00:24.771839  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:24.772332  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:24.772360  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:24.772272  257331 retry.go:31] will retry after 425.456586ms: waiting for machine to come up
	I0920 18:00:25.199046  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:25.199733  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:25.199762  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:25.199691  257331 retry.go:31] will retry after 471.75067ms: waiting for machine to come up
	I0920 18:00:25.673494  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:25.674017  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:25.674046  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:25.673921  257331 retry.go:31] will retry after 587.223627ms: waiting for machine to come up
	I0920 18:00:26.262671  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:26.263313  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:26.263345  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:26.263252  257331 retry.go:31] will retry after 883.317566ms: waiting for machine to come up
	I0920 18:00:27.148800  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:27.149230  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:27.149252  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:27.149182  257331 retry.go:31] will retry after 1.299880509s: waiting for machine to come up
	I0920 18:00:28.450607  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:28.451213  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:28.451237  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:28.451146  257331 retry.go:31] will retry after 1.154105376s: waiting for machine to come up
	I0920 18:00:29.607236  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:29.607729  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:29.607762  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:29.607684  257331 retry.go:31] will retry after 1.399507975s: waiting for machine to come up
	I0920 18:00:31.009117  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:31.009614  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:31.009645  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:31.009556  257331 retry.go:31] will retry after 2.255483173s: waiting for machine to come up
	I0920 18:00:33.266732  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:33.267250  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:33.267280  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:33.267181  257331 retry.go:31] will retry after 3.331108113s: waiting for machine to come up
	I0920 18:00:36.602825  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:36.603313  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:36.603336  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:36.603267  257331 retry.go:31] will retry after 4.086437861s: waiting for machine to come up
	I0920 18:00:40.692990  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:40.693433  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find current IP address of domain ha-347193-m03 in network mk-ha-347193
	I0920 18:00:40.693462  256536 main.go:141] libmachine: (ha-347193-m03) DBG | I0920 18:00:40.693375  257331 retry.go:31] will retry after 5.025372778s: waiting for machine to come up
	I0920 18:00:45.723079  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:45.723614  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has current primary IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:45.723644  256536 main.go:141] libmachine: (ha-347193-m03) Found IP for machine: 192.168.39.250
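The "will retry after ...: waiting for machine to come up" lines above are a polling loop with a growing delay, repeated until the new VM's DHCP lease yields an IP. A generic sketch of such a loop; getIP is a hypothetical stand-in for the driver's lease lookup, and the backoff constants are chosen for illustration rather than taken from minikube's retry package:

    // Sketch: poll with a growing, capped delay until an IP appears or the deadline passes.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func waitForIP(getIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := getIP(); err == nil && ip != "" {
                return ip, nil
            }
            time.Sleep(delay)
            if delay < 5*time.Second { // cap the backoff, roughly like the log's growing waits
                delay += delay / 2
            }
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        // Example use with a stub that never returns an IP, so the call times out.
        ip, err := waitForIP(func() (string, error) { return "", nil }, 2*time.Second)
        fmt.Println(ip, err)
    }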
	I0920 18:00:45.723658  256536 main.go:141] libmachine: (ha-347193-m03) Reserving static IP address...
	I0920 18:00:45.724041  256536 main.go:141] libmachine: (ha-347193-m03) DBG | unable to find host DHCP lease matching {name: "ha-347193-m03", mac: "52:54:00:80:1a:4c", ip: "192.168.39.250"} in network mk-ha-347193
	I0920 18:00:45.808270  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Getting to WaitForSSH function...
	I0920 18:00:45.808305  256536 main.go:141] libmachine: (ha-347193-m03) Reserved static IP address: 192.168.39.250
	I0920 18:00:45.808317  256536 main.go:141] libmachine: (ha-347193-m03) Waiting for SSH to be available...
	I0920 18:00:45.811196  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:45.811660  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:minikube Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:45.811697  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:45.811825  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Using SSH client type: external
	I0920 18:00:45.811848  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa (-rw-------)
	I0920 18:00:45.811941  256536 main.go:141] libmachine: (ha-347193-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:00:45.811975  256536 main.go:141] libmachine: (ha-347193-m03) DBG | About to run SSH command:
	I0920 18:00:45.811991  256536 main.go:141] libmachine: (ha-347193-m03) DBG | exit 0
	I0920 18:00:45.942448  256536 main.go:141] libmachine: (ha-347193-m03) DBG | SSH cmd err, output: <nil>: 
	I0920 18:00:45.942757  256536 main.go:141] libmachine: (ha-347193-m03) KVM machine creation complete!
	I0920 18:00:45.943036  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetConfigRaw
	I0920 18:00:45.943611  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:45.943802  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:45.943956  256536 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:00:45.943968  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetState
	I0920 18:00:45.945108  256536 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:00:45.945127  256536 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:00:45.945134  256536 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:00:45.945143  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:45.947795  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:45.948180  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:45.948212  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:45.948362  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:45.948540  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:45.948731  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:45.948909  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:45.949088  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 18:00:45.949376  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0920 18:00:45.949397  256536 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:00:46.053564  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:00:46.053620  256536 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:00:46.053632  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:46.056590  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.057022  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:46.057055  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.057256  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:46.057474  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.057655  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.057801  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:46.058159  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 18:00:46.058349  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0920 18:00:46.058359  256536 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:00:46.162650  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:00:46.162739  256536 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:00:46.162750  256536 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:00:46.162759  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetMachineName
	I0920 18:00:46.163059  256536 buildroot.go:166] provisioning hostname "ha-347193-m03"
	I0920 18:00:46.163088  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetMachineName
	I0920 18:00:46.163316  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:46.166267  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.166667  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:46.166690  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.166891  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:46.167092  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.167331  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.167501  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:46.167710  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 18:00:46.167873  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0920 18:00:46.167885  256536 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-347193-m03 && echo "ha-347193-m03" | sudo tee /etc/hostname
	I0920 18:00:46.284161  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-347193-m03
	
	I0920 18:00:46.284194  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:46.287604  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.288162  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:46.288212  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.288377  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:46.288598  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.288781  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.288997  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:46.289164  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 18:00:46.289333  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0920 18:00:46.289348  256536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-347193-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-347193-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-347193-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:00:46.403249  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
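	A minimal sketch for verifying the /etc/hosts fix applied by the script above, assuming a shell on the guest; the commands are illustrative and not taken from the log:
	    # Confirm the hostname set earlier and the 127.0.1.1 entry the script writes
	    hostname                        # expect: ha-347193-m03
	    grep '127.0.1.1' /etc/hosts     # expect: 127.0.1.1 ha-347193-m03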
	I0920 18:00:46.403284  256536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 18:00:46.403312  256536 buildroot.go:174] setting up certificates
	I0920 18:00:46.403323  256536 provision.go:84] configureAuth start
	I0920 18:00:46.403334  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetMachineName
	I0920 18:00:46.403661  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetIP
	I0920 18:00:46.407072  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.407456  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:46.407507  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.407605  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:46.410105  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.410437  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:46.410474  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.410693  256536 provision.go:143] copyHostCerts
	I0920 18:00:46.410731  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 18:00:46.410776  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 18:00:46.410788  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 18:00:46.410872  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 18:00:46.410969  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 18:00:46.410999  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 18:00:46.411009  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 18:00:46.411048  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 18:00:46.411112  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 18:00:46.411134  256536 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 18:00:46.411141  256536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 18:00:46.411174  256536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 18:00:46.411245  256536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.ha-347193-m03 san=[127.0.0.1 192.168.39.250 ha-347193-m03 localhost minikube]
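	A minimal sketch for inspecting the SANs of a server certificate generated this way, assuming openssl is available on the host; the server.pem path is the one listed in the auth options above, and the expected entries are the san list from this log line:
	    openssl x509 -in /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem \
	      -noout -text | grep -A1 'Subject Alternative Name'
	    # expect entries covering 127.0.0.1, 192.168.39.250, ha-347193-m03, localhost, minikube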
	I0920 18:00:46.589496  256536 provision.go:177] copyRemoteCerts
	I0920 18:00:46.589576  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:00:46.589611  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:46.592753  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.593174  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:46.593204  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.593452  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:46.593684  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.593864  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:46.594009  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa Username:docker}
	I0920 18:00:46.676664  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 18:00:46.676774  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:00:46.702866  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 18:00:46.702960  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 18:00:46.728033  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 18:00:46.728125  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:00:46.752902  256536 provision.go:87] duration metric: took 349.552078ms to configureAuth
	I0920 18:00:46.752934  256536 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:00:46.753136  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:00:46.753210  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:46.755906  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.756375  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:46.756398  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:46.756668  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:46.756899  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.757160  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:46.757332  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:46.757510  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 18:00:46.757706  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0920 18:00:46.757726  256536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:00:46.996420  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:00:46.996456  256536 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:00:46.996468  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetURL
	I0920 18:00:46.998173  256536 main.go:141] libmachine: (ha-347193-m03) DBG | Using libvirt version 6000000
	I0920 18:00:47.000536  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.000948  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:47.001005  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.001175  256536 main.go:141] libmachine: Docker is up and running!
	I0920 18:00:47.001193  256536 main.go:141] libmachine: Reticulating splines...
	I0920 18:00:47.001204  256536 client.go:171] duration metric: took 24.931317889s to LocalClient.Create
	I0920 18:00:47.001232  256536 start.go:167] duration metric: took 24.931386973s to libmachine.API.Create "ha-347193"
	I0920 18:00:47.001245  256536 start.go:293] postStartSetup for "ha-347193-m03" (driver="kvm2")
	I0920 18:00:47.001262  256536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:00:47.001288  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:47.001582  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:00:47.001615  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:47.005636  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.006217  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:47.006249  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.006471  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:47.006730  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:47.006897  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:47.007131  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa Username:docker}
	I0920 18:00:47.088575  256536 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:00:47.093116  256536 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:00:47.093144  256536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 18:00:47.093215  256536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 18:00:47.093286  256536 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 18:00:47.093296  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /etc/ssl/certs/2448492.pem
	I0920 18:00:47.093380  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:00:47.103343  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:00:47.129139  256536 start.go:296] duration metric: took 127.87289ms for postStartSetup
	I0920 18:00:47.129196  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetConfigRaw
	I0920 18:00:47.129896  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetIP
	I0920 18:00:47.132942  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.133411  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:47.133437  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.133773  256536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 18:00:47.134091  256536 start.go:128] duration metric: took 25.084442035s to createHost
	I0920 18:00:47.134127  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:47.136774  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.137134  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:47.137159  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.137348  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:47.137616  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:47.137786  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:47.137992  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:47.138197  256536 main.go:141] libmachine: Using SSH client type: native
	I0920 18:00:47.138375  256536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0920 18:00:47.138386  256536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:00:47.242925  256536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855247.221790500
	
	I0920 18:00:47.242952  256536 fix.go:216] guest clock: 1726855247.221790500
	I0920 18:00:47.242962  256536 fix.go:229] Guest: 2024-09-20 18:00:47.2217905 +0000 UTC Remote: 2024-09-20 18:00:47.134109422 +0000 UTC m=+147.450601767 (delta=87.681078ms)
	I0920 18:00:47.242983  256536 fix.go:200] guest clock delta is within tolerance: 87.681078ms
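	The delta above comes from running date +%s.%N on the guest and comparing it to the local wall clock; a rough sketch of the same check, assuming the SSH key shown earlier and bc on the host:
	    guest=$(ssh -o StrictHostKeyChecking=no \
	      -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa \
	      docker@192.168.39.250 'date +%s.%N')
	    host=$(date +%s.%N)
	    # difference in seconds; should stay within the tolerance reported above
	    echo "delta: $(echo "$host - $guest" | bc)"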
	I0920 18:00:47.242988  256536 start.go:83] releasing machines lock for "ha-347193-m03", held for 25.193498164s
	I0920 18:00:47.243006  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:47.243300  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetIP
	I0920 18:00:47.246354  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.246809  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:47.246844  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.249405  256536 out.go:177] * Found network options:
	I0920 18:00:47.251083  256536 out.go:177]   - NO_PROXY=192.168.39.246,192.168.39.241
	W0920 18:00:47.252536  256536 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 18:00:47.252563  256536 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 18:00:47.252582  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:47.253272  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:47.253546  256536 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:00:47.253662  256536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:00:47.253727  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	W0920 18:00:47.253771  256536 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 18:00:47.253799  256536 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 18:00:47.253880  256536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:00:47.253928  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:00:47.256829  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.256923  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.257208  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:47.257233  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.257309  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:47.257347  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:47.257407  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:47.257616  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:47.257619  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:00:47.257870  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:47.257875  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:00:47.258038  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa Username:docker}
	I0920 18:00:47.258107  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:00:47.258329  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa Username:docker}
	I0920 18:00:47.495115  256536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:00:47.501076  256536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:00:47.501151  256536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:00:47.517330  256536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:00:47.517360  256536 start.go:495] detecting cgroup driver to use...
	I0920 18:00:47.517421  256536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:00:47.534608  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:00:47.549798  256536 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:00:47.549868  256536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:00:47.564991  256536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:00:47.580654  256536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:00:47.705785  256536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:00:47.870467  256536 docker.go:233] disabling docker service ...
	I0920 18:00:47.870543  256536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:00:47.889659  256536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:00:47.904008  256536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:00:48.037069  256536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:00:48.172437  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:00:48.186077  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:00:48.205661  256536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:00:48.205724  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:00:48.216421  256536 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:00:48.216509  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:00:48.228291  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:00:48.239306  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:00:48.249763  256536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:00:48.260784  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:00:48.271597  256536 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:00:48.290072  256536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:00:48.301232  256536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:00:48.311548  256536 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:00:48.311624  256536 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:00:48.327406  256536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:00:48.338454  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:00:48.463827  256536 ssh_runner.go:195] Run: sudo systemctl restart crio
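	A short sketch for spot-checking the runtime configuration written in the steps above; the files and settings are the ones this log touches, and the commands are illustrative rather than part of the test run:
	    grep -E 'pause_image|cgroup_manager|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
	    cat /etc/sysconfig/crio.minikube     # the --insecure-registry option written earlier
	    lsmod | grep br_netfilter            # loaded via modprobe after the sysctl probe failed
	    sysctl net.ipv4.ip_forward           # should report 1 after the echo above
	    sudo systemctl is-active crio        # crio was just restarted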
	I0920 18:00:48.563927  256536 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:00:48.564016  256536 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:00:48.569050  256536 start.go:563] Will wait 60s for crictl version
	I0920 18:00:48.569137  256536 ssh_runner.go:195] Run: which crictl
	I0920 18:00:48.573089  256536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:00:48.612882  256536 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:00:48.612989  256536 ssh_runner.go:195] Run: crio --version
	I0920 18:00:48.641884  256536 ssh_runner.go:195] Run: crio --version
	I0920 18:00:48.674772  256536 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:00:48.676208  256536 out.go:177]   - env NO_PROXY=192.168.39.246
	I0920 18:00:48.677575  256536 out.go:177]   - env NO_PROXY=192.168.39.246,192.168.39.241
	I0920 18:00:48.679175  256536 main.go:141] libmachine: (ha-347193-m03) Calling .GetIP
	I0920 18:00:48.682184  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:48.682668  256536 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:00:48.682700  256536 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:00:48.682899  256536 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:00:48.687203  256536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:00:48.700132  256536 mustload.go:65] Loading cluster: ha-347193
	I0920 18:00:48.700432  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:00:48.700738  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:00:48.700780  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:00:48.718208  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I0920 18:00:48.718740  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:00:48.719373  256536 main.go:141] libmachine: Using API Version  1
	I0920 18:00:48.719397  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:00:48.719797  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:00:48.720025  256536 main.go:141] libmachine: (ha-347193) Calling .GetState
	I0920 18:00:48.722026  256536 host.go:66] Checking if "ha-347193" exists ...
	I0920 18:00:48.722319  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:00:48.722366  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:00:48.738476  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42131
	I0920 18:00:48.739047  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:00:48.739709  256536 main.go:141] libmachine: Using API Version  1
	I0920 18:00:48.739737  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:00:48.740150  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:00:48.740408  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:00:48.740641  256536 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193 for IP: 192.168.39.250
	I0920 18:00:48.740657  256536 certs.go:194] generating shared ca certs ...
	I0920 18:00:48.740678  256536 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:00:48.740861  256536 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 18:00:48.740924  256536 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 18:00:48.740938  256536 certs.go:256] generating profile certs ...
	I0920 18:00:48.741049  256536 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key
	I0920 18:00:48.741086  256536 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.071b5fb5
	I0920 18:00:48.741106  256536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.071b5fb5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.241 192.168.39.250 192.168.39.254]
	I0920 18:00:48.849787  256536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.071b5fb5 ...
	I0920 18:00:48.849825  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.071b5fb5: {Name:mk94b8924122fda4caf4db9161420b6f420a2437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:00:48.850030  256536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.071b5fb5 ...
	I0920 18:00:48.850042  256536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.071b5fb5: {Name:mk6d1c5532994e70c91ba359922d7d11837270cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:00:48.850120  256536 certs.go:381] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.071b5fb5 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt
	I0920 18:00:48.850256  256536 certs.go:385] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.071b5fb5 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key
	I0920 18:00:48.850383  256536 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key
	I0920 18:00:48.850401  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 18:00:48.850413  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 18:00:48.850425  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 18:00:48.850434  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 18:00:48.850447  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 18:00:48.850458  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 18:00:48.850472  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 18:00:48.866055  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 18:00:48.866157  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 18:00:48.866197  256536 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 18:00:48.866207  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:00:48.866228  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:00:48.866250  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:00:48.866268  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 18:00:48.866305  256536 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:00:48.866332  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:00:48.866346  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem -> /usr/share/ca-certificates/244849.pem
	I0920 18:00:48.866361  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /usr/share/ca-certificates/2448492.pem
	I0920 18:00:48.866398  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:00:48.869320  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:00:48.869797  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:00:48.869831  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:00:48.870003  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:00:48.870250  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:00:48.870392  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:00:48.870532  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 18:00:48.946355  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 18:00:48.951957  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 18:00:48.963708  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 18:00:48.968268  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0920 18:00:48.979656  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 18:00:48.983832  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 18:00:48.995975  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 18:00:48.999924  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0920 18:00:49.010455  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 18:00:49.014784  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 18:00:49.025741  256536 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 18:00:49.030881  256536 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0920 18:00:49.042858  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:00:49.071216  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:00:49.096135  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:00:49.120994  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:00:49.146256  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0920 18:00:49.170936  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:00:49.195738  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:00:49.219660  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:00:49.243873  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:00:49.268501  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 18:00:49.293119  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 18:00:49.317663  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 18:00:49.336046  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0920 18:00:49.352794  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 18:00:49.370728  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0920 18:00:49.388727  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 18:00:49.406268  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0920 18:00:49.422685  256536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 18:00:49.439002  256536 ssh_runner.go:195] Run: openssl version
	I0920 18:00:49.444882  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:00:49.456482  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:00:49.461403  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:00:49.461480  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:00:49.470070  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:00:49.481997  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 18:00:49.496420  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 18:00:49.501453  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 18:00:49.501530  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 18:00:49.508441  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 18:00:49.521740  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 18:00:49.535641  256536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 18:00:49.541368  256536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 18:00:49.541431  256536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 18:00:49.547775  256536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
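	The link names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes of the corresponding certificates; a sketch of the equivalent manual step, using the same cert path the log installs:
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem)
	    sudo ln -fs /usr/share/ca-certificates/2448492.pem "/etc/ssl/certs/${hash}.0"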
	I0920 18:00:49.559535  256536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:00:49.563545  256536 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:00:49.563612  256536 kubeadm.go:934] updating node {m03 192.168.39.250 8443 v1.31.1 crio true true} ...
	I0920 18:00:49.563727  256536 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-347193-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
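	A minimal sketch for confirming that a kubelet override like the unit snippet above is in effect on the node, assuming the drop-in has been installed and systemd is managing kubelet; the exact drop-in location is an assumption, not taken from the log:
	    systemctl cat kubelet | grep -- '--node-ip'        # expect --node-ip=192.168.39.250
	    sudo systemctl daemon-reload && sudo systemctl restart kubelet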
	I0920 18:00:49.563772  256536 kube-vip.go:115] generating kube-vip config ...
	I0920 18:00:49.563822  256536 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 18:00:49.580897  256536 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 18:00:49.580978  256536 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
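The manifest above is the kube-vip static pod written to /etc/kubernetes/manifests on each control-plane node; with cp_enable and lb_enable set it both holds the HA VIP 192.168.39.254 and load-balances API traffic on port 8443. A small sketch of rendering the VIP-specific env entries with text/template, using a hypothetical template and struct rather than minikube's kube-vip.go.

package main

import (
	"os"
	"text/template"
)

// A hypothetical subset of the env entries in the manifest above; the real
// template carries many more fields.
const envTmpl = `    - name: port
      value: "{{.Port}}"
    - name: vip_interface
      value: {{.Interface}}
    - name: address
      value: {{.VIP}}
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "{{.Port}}"
`

type vipConfig struct {
	VIP       string
	Interface string
	Port      int
}

func main() {
	t := template.Must(template.New("kube-vip-env").Parse(envTmpl))
	// Values taken from the log: the HA VIP and API server port for this cluster.
	if err := t.Execute(os.Stdout, vipConfig{VIP: "192.168.39.254", Interface: "eth0", Port: 8443}); err != nil {
		panic(err)
	}
}
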
	I0920 18:00:49.581038  256536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:00:49.590566  256536 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 18:00:49.590695  256536 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 18:00:49.600047  256536 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0920 18:00:49.600048  256536 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0920 18:00:49.600092  256536 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 18:00:49.600085  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 18:00:49.600108  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 18:00:49.600145  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:00:49.600623  256536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 18:00:49.600694  256536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 18:00:49.606126  256536 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 18:00:49.606169  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 18:00:49.632538  256536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 18:00:49.632673  256536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 18:00:49.632669  256536 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 18:00:49.632772  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 18:00:49.675110  256536 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 18:00:49.675165  256536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
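Each missing binary is fetched from dl.k8s.io together with a .sha256 checksum URL and then copied into /var/lib/minikube/binaries/v1.31.1 on the new node. A sketch of the verification step, assuming the .sha256 file carries the hex digest; the file names here are illustrative.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verify compares a downloaded binary against the digest published next to it.
func verify(binPath, sumPath string) (bool, error) {
	f, err := os.Open(binPath)
	if err != nil {
		return false, err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return false, err
	}
	sum, err := os.ReadFile(sumPath)
	if err != nil {
		return false, err
	}
	fields := strings.Fields(string(sum))
	if len(fields) == 0 {
		return false, fmt.Errorf("empty checksum file %s", sumPath)
	}
	return hex.EncodeToString(h.Sum(nil)) == fields[0], nil
}

func main() {
	ok, err := verify("kubeadm", "kubeadm.sha256")
	fmt.Println(ok, err)
}
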
	I0920 18:00:50.517293  256536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 18:00:50.527931  256536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 18:00:50.545163  256536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:00:50.562804  256536 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 18:00:50.579873  256536 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 18:00:50.583899  256536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
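The one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the VIP, so repeated provisioning never duplicates the line. The same idempotent rewrite as a Go sketch (it needs root to write /etc/hosts).

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost drops any existing line ending in "<tab>name" and appends "ip<tab>name",
// the same effect as the grep/echo/cp one-liner above.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
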
	I0920 18:00:50.595871  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:00:50.727492  256536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:00:50.746998  256536 host.go:66] Checking if "ha-347193" exists ...
	I0920 18:00:50.747552  256536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:00:50.747621  256536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:00:50.764998  256536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35973
	I0920 18:00:50.765568  256536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:00:50.766259  256536 main.go:141] libmachine: Using API Version  1
	I0920 18:00:50.766285  256536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:00:50.766697  256536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:00:50.766924  256536 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:00:50.767151  256536 start.go:317] joinCluster: &{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:00:50.767302  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 18:00:50.767319  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:00:50.770123  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:00:50.770554  256536 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:00:50.770590  256536 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:00:50.770696  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:00:50.770948  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:00:50.771120  256536 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:00:50.771276  256536 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 18:00:50.937328  256536 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:00:50.937401  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token w3dh0u.en8aqh39le5u0uln --discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-347193-m03 --control-plane --apiserver-advertise-address=192.168.39.250 --apiserver-bind-port=8443"
	I0920 18:01:13.927196  256536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token w3dh0u.en8aqh39le5u0uln --discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-347193-m03 --control-plane --apiserver-advertise-address=192.168.39.250 --apiserver-bind-port=8443": (22.989760407s)
	I0920 18:01:13.927243  256536 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 18:01:14.543516  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-347193-m03 minikube.k8s.io/updated_at=2024_09_20T18_01_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=ha-347193 minikube.k8s.io/primary=false
	I0920 18:01:14.679099  256536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-347193-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 18:01:14.820428  256536 start.go:319] duration metric: took 24.053268109s to joinCluster
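Once kubeadm join completes, the new node is labeled and its node-role.kubernetes.io/control-plane:NoSchedule taint is removed so ordinary pods can land on it. The taint removal expressed with client-go rather than the kubectl invocation above; the node name comes from the log, the kubeconfig path is illustrative.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	node, err := cs.CoreV1().Nodes().Get(ctx, "ha-347193-m03", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Keep every taint except control-plane:NoSchedule, then write the node back.
	kept := node.Spec.Taints[:0]
	for _, t := range node.Spec.Taints {
		if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
			continue
		}
		kept = append(kept, t)
	}
	node.Spec.Taints = kept
	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
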
	I0920 18:01:14.820517  256536 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:01:14.820875  256536 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:01:14.822533  256536 out.go:177] * Verifying Kubernetes components...
	I0920 18:01:14.823874  256536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:01:15.125787  256536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:01:15.183134  256536 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 18:01:15.183424  256536 kapi.go:59] client config for ha-347193: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.crt", KeyFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key", CAFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 18:01:15.183503  256536 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.246:8443
	I0920 18:01:15.183888  256536 node_ready.go:35] waiting up to 6m0s for node "ha-347193-m03" to be "Ready" ...
	I0920 18:01:15.184021  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:15.184034  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:15.184045  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:15.184057  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:15.188812  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:15.684732  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:15.684762  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:15.684773  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:15.684779  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:15.688455  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:16.184249  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:16.184278  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:16.184290  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:16.184296  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:16.188149  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:16.684238  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:16.684266  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:16.684276  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:16.684280  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:16.688135  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:17.184574  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:17.184605  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:17.184616  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:17.184622  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:17.188720  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:17.189742  256536 node_ready.go:53] node "ha-347193-m03" has status "Ready":"False"
	I0920 18:01:17.684157  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:17.684188  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:17.684200  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:17.684205  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:17.687993  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:18.184987  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:18.185016  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:18.185027  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:18.185033  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:18.188436  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:18.684240  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:18.684263  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:18.684270  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:18.684274  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:18.688063  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:19.184814  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:19.184846  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:19.184859  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:19.184868  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:19.189842  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:19.190448  256536 node_ready.go:53] node "ha-347193-m03" has status "Ready":"False"
	I0920 18:01:19.684861  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:19.684890  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:19.684901  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:19.684908  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:19.688056  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:20.184157  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:20.184183  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:20.184192  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:20.184196  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:20.190785  256536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 18:01:20.684195  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:20.684230  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:20.684241  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:20.684245  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:20.688027  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:21.185183  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:21.185207  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:21.185216  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:21.185221  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:21.188774  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:21.684314  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:21.684338  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:21.684350  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:21.684355  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:21.687635  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:21.688202  256536 node_ready.go:53] node "ha-347193-m03" has status "Ready":"False"
	I0920 18:01:22.185048  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:22.185073  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:22.185084  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:22.185089  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:22.188754  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:22.684520  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:22.684570  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:22.684579  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:22.684584  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:22.688376  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:23.184575  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:23.184600  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:23.184608  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:23.184612  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:23.189052  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:23.684932  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:23.684955  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:23.684965  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:23.684968  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:23.688597  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:23.689108  256536 node_ready.go:53] node "ha-347193-m03" has status "Ready":"False"
	I0920 18:01:24.184308  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:24.184334  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:24.184344  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:24.184350  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:24.188092  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:24.684218  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:24.684252  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:24.684261  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:24.684264  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:24.688018  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:25.184193  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:25.184221  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:25.184232  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:25.184237  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:25.188243  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:25.684786  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:25.684818  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:25.684830  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:25.684837  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:25.687395  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:01:26.184220  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:26.184255  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:26.184270  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:26.184273  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:26.188544  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:26.189181  256536 node_ready.go:53] node "ha-347193-m03" has status "Ready":"False"
	I0920 18:01:26.684404  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:26.684432  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:26.684445  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:26.684452  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:26.688821  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:27.184155  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:27.184182  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:27.184191  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:27.184194  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:27.187676  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:27.684611  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:27.684643  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:27.684651  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:27.684654  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:27.688751  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:28.184312  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:28.184339  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:28.184347  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:28.184350  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:28.188272  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:28.684161  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:28.684200  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:28.684208  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:28.684212  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:28.687898  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:28.688502  256536 node_ready.go:53] node "ha-347193-m03" has status "Ready":"False"
	I0920 18:01:29.184527  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:29.184554  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:29.184563  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:29.184570  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:29.188227  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:29.685118  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:29.685147  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:29.685157  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:29.685159  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:29.689095  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:30.184672  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:30.184697  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:30.184705  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:30.184709  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:30.188058  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:30.685162  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:30.685189  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:30.685200  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:30.685206  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:30.688686  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:30.689119  256536 node_ready.go:53] node "ha-347193-m03" has status "Ready":"False"
	I0920 18:01:31.184362  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:31.184388  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:31.184397  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:31.184401  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:31.188508  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:31.684348  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:31.684374  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:31.684382  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:31.684388  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:31.688113  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:32.184592  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:32.184620  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.184629  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.184633  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.188695  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:32.684894  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:32.684920  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.684929  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.684933  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.688521  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:32.689073  256536 node_ready.go:49] node "ha-347193-m03" has status "Ready":"True"
	I0920 18:01:32.689098  256536 node_ready.go:38] duration metric: took 17.505173835s for node "ha-347193-m03" to be "Ready" ...
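The request loop above polls GET /api/v1/nodes/ha-347193-m03 roughly every half second until the NodeReady condition reports True, within a 6-minute budget. A client-go sketch of the same wait; the kubeconfig path and interval are illustrative, this is not minikube's node_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same 6m budget as the log
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-347193-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node")
}
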
	I0920 18:01:32.689108  256536 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:01:32.689179  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:01:32.689189  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.689196  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.689200  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.713301  256536 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0920 18:01:32.721489  256536 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6llmd" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.721627  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6llmd
	I0920 18:01:32.721638  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.721649  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.721660  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.731687  256536 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0920 18:01:32.732373  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:32.732393  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.732404  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.732410  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.740976  256536 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 18:01:32.741470  256536 pod_ready.go:93] pod "coredns-7c65d6cfc9-6llmd" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:32.741487  256536 pod_ready.go:82] duration metric: took 19.962818ms for pod "coredns-7c65d6cfc9-6llmd" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.741496  256536 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bkmhn" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.741558  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bkmhn
	I0920 18:01:32.741564  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.741572  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.741578  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.754720  256536 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0920 18:01:32.755448  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:32.755463  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.755471  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.755475  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.764627  256536 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0920 18:01:32.765312  256536 pod_ready.go:93] pod "coredns-7c65d6cfc9-bkmhn" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:32.765342  256536 pod_ready.go:82] duration metric: took 23.838489ms for pod "coredns-7c65d6cfc9-bkmhn" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.765357  256536 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.765462  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-347193
	I0920 18:01:32.765474  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.765484  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.765492  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.774103  256536 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 18:01:32.774830  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:32.774850  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.774858  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.774861  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.777561  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:01:32.778082  256536 pod_ready.go:93] pod "etcd-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:32.778110  256536 pod_ready.go:82] duration metric: took 12.744363ms for pod "etcd-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.778122  256536 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.778202  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-347193-m02
	I0920 18:01:32.778213  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.778225  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.778234  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.781035  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:01:32.781896  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:32.781933  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.781945  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.781950  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.784612  256536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:01:32.785026  256536 pod_ready.go:93] pod "etcd-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:32.785044  256536 pod_ready.go:82] duration metric: took 6.912479ms for pod "etcd-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.785057  256536 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:32.885398  256536 request.go:632] Waited for 100.268978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-347193-m03
	I0920 18:01:32.885496  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-347193-m03
	I0920 18:01:32.885505  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:32.885513  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:32.885520  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:32.889795  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
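The "Waited ... due to client-side throttling" messages come from client-go's own rate limiter (by default about 5 requests per second with a burst of 10), not from API-server priority and fairness. If the added latency matters, the limits can be raised on rest.Config before building the clientset; the values below are illustrative.

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go applies this limiter before any request is sent; raising QPS/Burst
	// removes the client-side waits seen in the log.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
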
	I0920 18:01:33.084880  256536 request.go:632] Waited for 194.30681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:33.084946  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:33.084952  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:33.084960  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:33.084964  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:33.088321  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:33.088961  256536 pod_ready.go:93] pod "etcd-ha-347193-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:33.088982  256536 pod_ready.go:82] duration metric: took 303.916513ms for pod "etcd-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:33.089001  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:33.285463  256536 request.go:632] Waited for 196.366216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193
	I0920 18:01:33.285538  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193
	I0920 18:01:33.285544  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:33.285553  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:33.285557  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:33.289153  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:33.485283  256536 request.go:632] Waited for 195.396109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:33.485343  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:33.485349  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:33.485363  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:33.485368  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:33.488640  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:33.489171  256536 pod_ready.go:93] pod "kube-apiserver-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:33.489194  256536 pod_ready.go:82] duration metric: took 400.186326ms for pod "kube-apiserver-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:33.489203  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:33.685381  256536 request.go:632] Waited for 196.09905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193-m02
	I0920 18:01:33.685495  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193-m02
	I0920 18:01:33.685509  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:33.685526  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:33.685534  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:33.689644  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:33.885477  256536 request.go:632] Waited for 194.996096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:33.885557  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:33.885565  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:33.885575  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:33.885584  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:33.888804  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:33.889531  256536 pod_ready.go:93] pod "kube-apiserver-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:33.889552  256536 pod_ready.go:82] duration metric: took 400.342117ms for pod "kube-apiserver-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:33.889562  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:34.085670  256536 request.go:632] Waited for 196.018178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193-m03
	I0920 18:01:34.085746  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-347193-m03
	I0920 18:01:34.085754  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:34.085766  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:34.085774  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:34.089521  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:34.285667  256536 request.go:632] Waited for 195.397565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:34.285731  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:34.285736  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:34.285744  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:34.285747  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:34.289576  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:34.290194  256536 pod_ready.go:93] pod "kube-apiserver-ha-347193-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:34.290225  256536 pod_ready.go:82] duration metric: took 400.654429ms for pod "kube-apiserver-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:34.290241  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:34.485359  256536 request.go:632] Waited for 195.022891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193
	I0920 18:01:34.485429  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193
	I0920 18:01:34.485446  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:34.485459  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:34.485466  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:34.489143  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:34.685371  256536 request.go:632] Waited for 195.396623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:34.685455  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:34.685461  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:34.685471  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:34.685477  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:34.688902  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:34.689635  256536 pod_ready.go:93] pod "kube-controller-manager-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:34.689658  256536 pod_ready.go:82] duration metric: took 399.407979ms for pod "kube-controller-manager-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:34.689671  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:34.885295  256536 request.go:632] Waited for 195.53866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193-m02
	I0920 18:01:34.885360  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193-m02
	I0920 18:01:34.885365  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:34.885373  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:34.885377  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:34.888992  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:35.085267  256536 request.go:632] Waited for 195.362009ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:35.085328  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:35.085334  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:35.085345  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:35.085356  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:35.088980  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:35.090052  256536 pod_ready.go:93] pod "kube-controller-manager-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:35.090080  256536 pod_ready.go:82] duration metric: took 400.399772ms for pod "kube-controller-manager-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:35.090093  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:35.285052  256536 request.go:632] Waited for 194.845569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193-m03
	I0920 18:01:35.285131  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-347193-m03
	I0920 18:01:35.285140  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:35.285150  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:35.285160  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:35.288701  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:35.484934  256536 request.go:632] Waited for 195.307179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:35.485011  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:35.485016  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:35.485024  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:35.485033  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:35.488224  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:35.488823  256536 pod_ready.go:93] pod "kube-controller-manager-ha-347193-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:35.488842  256536 pod_ready.go:82] duration metric: took 398.741341ms for pod "kube-controller-manager-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:35.488859  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ffdvq" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:35.684978  256536 request.go:632] Waited for 196.047954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ffdvq
	I0920 18:01:35.685045  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ffdvq
	I0920 18:01:35.685051  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:35.685059  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:35.685063  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:35.689004  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:35.885928  256536 request.go:632] Waited for 196.269085ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:35.886004  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:35.886014  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:35.886025  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:35.886035  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:35.889926  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:35.890483  256536 pod_ready.go:93] pod "kube-proxy-ffdvq" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:35.890511  256536 pod_ready.go:82] duration metric: took 401.643812ms for pod "kube-proxy-ffdvq" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:35.890526  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pccxp" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:36.085261  256536 request.go:632] Waited for 194.62795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pccxp
	I0920 18:01:36.085385  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pccxp
	I0920 18:01:36.085393  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:36.085402  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:36.085408  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:36.089652  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:36.285734  256536 request.go:632] Waited for 195.416978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:36.285799  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:36.285804  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:36.285812  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:36.285816  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:36.289287  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:36.289898  256536 pod_ready.go:93] pod "kube-proxy-pccxp" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:36.289950  256536 pod_ready.go:82] duration metric: took 399.411009ms for pod "kube-proxy-pccxp" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:36.289967  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rdqkg" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:36.484907  256536 request.go:632] Waited for 194.838014ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdqkg
	I0920 18:01:36.485002  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdqkg
	I0920 18:01:36.485015  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:36.485026  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:36.485035  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:36.488569  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:36.685854  256536 request.go:632] Waited for 196.449208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:36.685961  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:36.685971  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:36.685979  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:36.685982  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:36.690267  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:36.691030  256536 pod_ready.go:93] pod "kube-proxy-rdqkg" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:36.691060  256536 pod_ready.go:82] duration metric: took 401.083761ms for pod "kube-proxy-rdqkg" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:36.691073  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:36.884877  256536 request.go:632] Waited for 193.713134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193
	I0920 18:01:36.884990  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193
	I0920 18:01:36.885002  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:36.885014  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:36.885023  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:36.888846  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:37.086004  256536 request.go:632] Waited for 196.564771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:37.086085  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193
	I0920 18:01:37.086094  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:37.086106  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:37.086115  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:37.090524  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:37.091265  256536 pod_ready.go:93] pod "kube-scheduler-ha-347193" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:37.091290  256536 pod_ready.go:82] duration metric: took 400.207966ms for pod "kube-scheduler-ha-347193" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:37.091300  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:37.285288  256536 request.go:632] Waited for 193.886376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193-m02
	I0920 18:01:37.285368  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193-m02
	I0920 18:01:37.285376  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:37.285388  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:37.285396  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:37.288742  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:37.485296  256536 request.go:632] Waited for 196.041594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:37.485365  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m02
	I0920 18:01:37.485370  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:37.485379  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:37.485382  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:37.488438  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:37.488873  256536 pod_ready.go:93] pod "kube-scheduler-ha-347193-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:37.488894  256536 pod_ready.go:82] duration metric: took 397.585949ms for pod "kube-scheduler-ha-347193-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:37.488904  256536 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:37.684947  256536 request.go:632] Waited for 195.929511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193-m03
	I0920 18:01:37.685019  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-347193-m03
	I0920 18:01:37.685027  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:37.685037  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:37.685042  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:37.688698  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:37.885884  256536 request.go:632] Waited for 196.412935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:37.885988  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-347193-m03
	I0920 18:01:37.885998  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:37.886006  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:37.886010  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:37.889509  256536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:01:37.890123  256536 pod_ready.go:93] pod "kube-scheduler-ha-347193-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:01:37.890146  256536 pod_ready.go:82] duration metric: took 401.23569ms for pod "kube-scheduler-ha-347193-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:01:37.890158  256536 pod_ready.go:39] duration metric: took 5.201039475s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:01:37.890178  256536 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:01:37.890240  256536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:01:37.905594  256536 api_server.go:72] duration metric: took 23.085026432s to wait for apiserver process to appear ...
	I0920 18:01:37.905621  256536 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:01:37.905659  256536 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I0920 18:01:37.910576  256536 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I0920 18:01:37.910667  256536 round_trippers.go:463] GET https://192.168.39.246:8443/version
	I0920 18:01:37.910679  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:37.910691  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:37.910701  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:37.911708  256536 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0920 18:01:37.911795  256536 api_server.go:141] control plane version: v1.31.1
	I0920 18:01:37.911813  256536 api_server.go:131] duration metric: took 6.185417ms to wait for apiserver health ...
	I0920 18:01:37.911822  256536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:01:38.085341  256536 request.go:632] Waited for 173.386572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:01:38.085419  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:01:38.085431  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:38.085456  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:38.085465  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:38.091784  256536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 18:01:38.097649  256536 system_pods.go:59] 24 kube-system pods found
	I0920 18:01:38.097681  256536 system_pods.go:61] "coredns-7c65d6cfc9-6llmd" [8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92] Running
	I0920 18:01:38.097686  256536 system_pods.go:61] "coredns-7c65d6cfc9-bkmhn" [f7862a6e-54cc-450c-b283-d20fb99f51ce] Running
	I0920 18:01:38.097691  256536 system_pods.go:61] "etcd-ha-347193" [e13fc198-b02b-4f0a-bf76-be0f519d9d57] Running
	I0920 18:01:38.097695  256536 system_pods.go:61] "etcd-ha-347193-m02" [4ea69953-b35a-4ae9-8153-cea3be5e2c1c] Running
	I0920 18:01:38.097698  256536 system_pods.go:61] "etcd-ha-347193-m03" [e83dd2f3-86bc-466d-9913-390f756db956] Running
	I0920 18:01:38.097701  256536 system_pods.go:61] "kindnet-5msnk" [af184b84-65ce-4ba0-879e-87ec81029f7e] Running
	I0920 18:01:38.097705  256536 system_pods.go:61] "kindnet-cqbxl" [3d49a6b1-5be5-4d96-98e3-bd05035a2d1b] Running
	I0920 18:01:38.097708  256536 system_pods.go:61] "kindnet-z24zp" [9271d251-2d95-4b23-85f3-7da6567b2fc3] Running
	I0920 18:01:38.097711  256536 system_pods.go:61] "kube-apiserver-ha-347193" [993ccf05-a39a-42b4-b82d-936531325dc4] Running
	I0920 18:01:38.097714  256536 system_pods.go:61] "kube-apiserver-ha-347193-m02" [43cd77b8-8925-4a04-a8cf-1b9a0cbbc502] Running
	I0920 18:01:38.097718  256536 system_pods.go:61] "kube-apiserver-ha-347193-m03" [02b7bcea-c245-4b1e-9be5-e815d4aceb74] Running
	I0920 18:01:38.097721  256536 system_pods.go:61] "kube-controller-manager-ha-347193" [6de3a14b-6587-45d4-aaee-1256b9c327cc] Running
	I0920 18:01:38.097724  256536 system_pods.go:61] "kube-controller-manager-ha-347193-m02" [cdf3f4d7-0675-4c59-8ad5-8901104d71c3] Running
	I0920 18:01:38.097727  256536 system_pods.go:61] "kube-controller-manager-ha-347193-m03" [3a4a0044-50e7-475a-9be9-76edda1c27ab] Running
	I0920 18:01:38.097729  256536 system_pods.go:61] "kube-proxy-ffdvq" [97120f62-0af2-405a-b8ff-639c72a39a2d] Running
	I0920 18:01:38.097732  256536 system_pods.go:61] "kube-proxy-pccxp" [3a4882b7-f59f-47d4-b2dc-d5b7f8f0d2c7] Running
	I0920 18:01:38.097735  256536 system_pods.go:61] "kube-proxy-rdqkg" [d9ae4e37-b29b-400a-af2d-544da4024069] Running
	I0920 18:01:38.097738  256536 system_pods.go:61] "kube-scheduler-ha-347193" [910baa0e-404e-4ac7-9262-848672eaf9cf] Running
	I0920 18:01:38.097743  256536 system_pods.go:61] "kube-scheduler-ha-347193-m02" [623b9c3b-b998-4516-a53e-17e9d8970594] Running
	I0920 18:01:38.097749  256536 system_pods.go:61] "kube-scheduler-ha-347193-m03" [cd08009b-7b3e-4c73-a2a0-824d43a19c0e] Running
	I0920 18:01:38.097751  256536 system_pods.go:61] "kube-vip-ha-347193" [20d6faa4-600f-4bd0-8acb-1f95c047da58] Running
	I0920 18:01:38.097754  256536 system_pods.go:61] "kube-vip-ha-347193-m02" [1455826c-7b3d-40f7-bb15-a9861ee95e19] Running
	I0920 18:01:38.097757  256536 system_pods.go:61] "kube-vip-ha-347193-m03" [d6b869ce-4510-400c-b8e9-6e3bec9718e4] Running
	I0920 18:01:38.097759  256536 system_pods.go:61] "storage-provisioner" [8924f7ce-85a0-4587-9c05-8a74c7113e9e] Running
	I0920 18:01:38.097766  256536 system_pods.go:74] duration metric: took 185.936377ms to wait for pod list to return data ...
	I0920 18:01:38.097773  256536 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:01:38.285212  256536 request.go:632] Waited for 187.355991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:01:38.285280  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:01:38.285285  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:38.285293  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:38.285298  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:38.290019  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:38.290139  256536 default_sa.go:45] found service account: "default"
	I0920 18:01:38.290156  256536 default_sa.go:55] duration metric: took 192.375892ms for default service account to be created ...
	I0920 18:01:38.290165  256536 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:01:38.485546  256536 request.go:632] Waited for 195.287049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:01:38.485611  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0920 18:01:38.485616  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:38.485641  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:38.485645  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:38.491609  256536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:01:38.498558  256536 system_pods.go:86] 24 kube-system pods found
	I0920 18:01:38.498588  256536 system_pods.go:89] "coredns-7c65d6cfc9-6llmd" [8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92] Running
	I0920 18:01:38.498594  256536 system_pods.go:89] "coredns-7c65d6cfc9-bkmhn" [f7862a6e-54cc-450c-b283-d20fb99f51ce] Running
	I0920 18:01:38.498598  256536 system_pods.go:89] "etcd-ha-347193" [e13fc198-b02b-4f0a-bf76-be0f519d9d57] Running
	I0920 18:01:38.498602  256536 system_pods.go:89] "etcd-ha-347193-m02" [4ea69953-b35a-4ae9-8153-cea3be5e2c1c] Running
	I0920 18:01:38.498606  256536 system_pods.go:89] "etcd-ha-347193-m03" [e83dd2f3-86bc-466d-9913-390f756db956] Running
	I0920 18:01:38.498610  256536 system_pods.go:89] "kindnet-5msnk" [af184b84-65ce-4ba0-879e-87ec81029f7e] Running
	I0920 18:01:38.498614  256536 system_pods.go:89] "kindnet-cqbxl" [3d49a6b1-5be5-4d96-98e3-bd05035a2d1b] Running
	I0920 18:01:38.498618  256536 system_pods.go:89] "kindnet-z24zp" [9271d251-2d95-4b23-85f3-7da6567b2fc3] Running
	I0920 18:01:38.498622  256536 system_pods.go:89] "kube-apiserver-ha-347193" [993ccf05-a39a-42b4-b82d-936531325dc4] Running
	I0920 18:01:38.498625  256536 system_pods.go:89] "kube-apiserver-ha-347193-m02" [43cd77b8-8925-4a04-a8cf-1b9a0cbbc502] Running
	I0920 18:01:38.498629  256536 system_pods.go:89] "kube-apiserver-ha-347193-m03" [02b7bcea-c245-4b1e-9be5-e815d4aceb74] Running
	I0920 18:01:38.498634  256536 system_pods.go:89] "kube-controller-manager-ha-347193" [6de3a14b-6587-45d4-aaee-1256b9c327cc] Running
	I0920 18:01:38.498637  256536 system_pods.go:89] "kube-controller-manager-ha-347193-m02" [cdf3f4d7-0675-4c59-8ad5-8901104d71c3] Running
	I0920 18:01:38.498641  256536 system_pods.go:89] "kube-controller-manager-ha-347193-m03" [3a4a0044-50e7-475a-9be9-76edda1c27ab] Running
	I0920 18:01:38.498644  256536 system_pods.go:89] "kube-proxy-ffdvq" [97120f62-0af2-405a-b8ff-639c72a39a2d] Running
	I0920 18:01:38.498647  256536 system_pods.go:89] "kube-proxy-pccxp" [3a4882b7-f59f-47d4-b2dc-d5b7f8f0d2c7] Running
	I0920 18:01:38.498653  256536 system_pods.go:89] "kube-proxy-rdqkg" [d9ae4e37-b29b-400a-af2d-544da4024069] Running
	I0920 18:01:38.498658  256536 system_pods.go:89] "kube-scheduler-ha-347193" [910baa0e-404e-4ac7-9262-848672eaf9cf] Running
	I0920 18:01:38.498662  256536 system_pods.go:89] "kube-scheduler-ha-347193-m02" [623b9c3b-b998-4516-a53e-17e9d8970594] Running
	I0920 18:01:38.498666  256536 system_pods.go:89] "kube-scheduler-ha-347193-m03" [cd08009b-7b3e-4c73-a2a0-824d43a19c0e] Running
	I0920 18:01:38.498669  256536 system_pods.go:89] "kube-vip-ha-347193" [20d6faa4-600f-4bd0-8acb-1f95c047da58] Running
	I0920 18:01:38.498673  256536 system_pods.go:89] "kube-vip-ha-347193-m02" [1455826c-7b3d-40f7-bb15-a9861ee95e19] Running
	I0920 18:01:38.498677  256536 system_pods.go:89] "kube-vip-ha-347193-m03" [d6b869ce-4510-400c-b8e9-6e3bec9718e4] Running
	I0920 18:01:38.498684  256536 system_pods.go:89] "storage-provisioner" [8924f7ce-85a0-4587-9c05-8a74c7113e9e] Running
	I0920 18:01:38.498690  256536 system_pods.go:126] duration metric: took 208.521056ms to wait for k8s-apps to be running ...
	I0920 18:01:38.498697  256536 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:01:38.498743  256536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:01:38.514029  256536 system_svc.go:56] duration metric: took 15.320471ms WaitForService to wait for kubelet
	I0920 18:01:38.514065  256536 kubeadm.go:582] duration metric: took 23.693509389s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:01:38.514086  256536 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:01:38.685544  256536 request.go:632] Waited for 171.353571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I0920 18:01:38.685619  256536 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I0920 18:01:38.685624  256536 round_trippers.go:469] Request Headers:
	I0920 18:01:38.685632  256536 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:01:38.685636  256536 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:01:38.690050  256536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:01:38.691008  256536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:01:38.691029  256536 node_conditions.go:123] node cpu capacity is 2
	I0920 18:01:38.691041  256536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:01:38.691045  256536 node_conditions.go:123] node cpu capacity is 2
	I0920 18:01:38.691049  256536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:01:38.691051  256536 node_conditions.go:123] node cpu capacity is 2
	I0920 18:01:38.691055  256536 node_conditions.go:105] duration metric: took 176.963396ms to run NodePressure ...
	I0920 18:01:38.691067  256536 start.go:241] waiting for startup goroutines ...
	I0920 18:01:38.691085  256536 start.go:255] writing updated cluster config ...
	I0920 18:01:38.691394  256536 ssh_runner.go:195] Run: rm -f paused
	I0920 18:01:38.746142  256536 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:01:38.748440  256536 out.go:177] * Done! kubectl is now configured to use "ha-347193" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 20 18:05:31 ha-347193 crio[669]: time="2024-09-20 18:05:31.970378116Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855531970346161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d912841f-ce67-45b9-a4da-19192de9954d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:05:31 ha-347193 crio[669]: time="2024-09-20 18:05:31.971546094Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d73347a8-c6d8-419f-9a45-e27cb396107c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:31 ha-347193 crio[669]: time="2024-09-20 18:05:31.971618784Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d73347a8-c6d8-419f-9a45-e27cb396107c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:31 ha-347193 crio[669]: time="2024-09-20 18:05:31.971932719Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24d13f339c817f2c3d48c059df5dbe95c744b4d770d8f156847fa2a2faddfb16,PodSandboxId:d56c4fb5022a404a46d5f7d82850d40a0aa7777c181ec28335c613507545a0bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726855304216814938,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f54f7a5f2c32d6bc0ec6ee35174b79a19554688494c5a052f414808aba6d5d3,PodSandboxId:1008d082466619b9dff1a593919ad42edc22d2689cb4c63ade9d89a2aa3d82cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726855158873195435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5,PodSandboxId:cfb097797b5199e13abd148addc36710b46ff21a5a455fd00cb451d38da7e05d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158811750044,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01,PodSandboxId:503157b6402f3fdb0fbd3acb2400d53236322647d77c96f1052ed35d749180f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158740895692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54
cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9,PodSandboxId:d420593f085b4cce05c57a111a173b0556ed0c67debb38fe8785523e9aaf085f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172685514
7923720838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5,PodSandboxId:db50f6f39d94cb0d879c4bc3f389365a1aefdeeb9341d4b7a315ad9363b767e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726855146590131954,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3702c95ae17f31f21a9df60d65fe7c873d5a4a63c4bb0951d83c81da6fdcdcc9,PodSandboxId:79b3c32a6e6c014d62d3cf90229370a249daff625a451df26bc56b63f13b5011,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726855137793174604,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40531f7fb6a94d470f366df1ed8127e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce6ebcdcfa257db8db2294b5d4f9a4b32180c57f4a35e50afb651fea43c30c4,PodSandboxId:88ee68a7e316b7dd733350aa45479a511371c952904195167a88e9851da02e65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855135226742097,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f,PodSandboxId:6700b91af83d5fda728b258af762ef66bdc5850c898a59cb3d1e0f4a4a4f5bf2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855135139365214,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d,PodSandboxId:3b399285f0a3eef630d7233d6655753e34da36bbb3233127467544caa535e7b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855135143406129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db95e41c4eeebdea4c6e2d542a978f368ff56b30edfe1ff54593feed82c7c09,PodSandboxId:a832aed299e3faf778cd7e1ebb68848a5f31d0f1bbd92c129bcc7511f62ef4df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855135082024585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d73347a8-c6d8-419f-9a45-e27cb396107c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:32 ha-347193 crio[669]: time="2024-09-20 18:05:32.007983576Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d6035ea-78cd-488e-9b55-26ab05de55b1 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:05:32 ha-347193 crio[669]: time="2024-09-20 18:05:32.008075816Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d6035ea-78cd-488e-9b55-26ab05de55b1 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:05:32 ha-347193 crio[669]: time="2024-09-20 18:05:32.008986867Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f57ce2f8-4a26-4f86-9bed-de93bcfd146e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:05:32 ha-347193 crio[669]: time="2024-09-20 18:05:32.009526437Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855532009499136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f57ce2f8-4a26-4f86-9bed-de93bcfd146e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:05:32 ha-347193 crio[669]: time="2024-09-20 18:05:32.010081777Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b0f57a8-6b4a-4e16-a7ed-4c5a5bfd4ecf name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:32 ha-347193 crio[669]: time="2024-09-20 18:05:32.010143233Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b0f57a8-6b4a-4e16-a7ed-4c5a5bfd4ecf name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:32 ha-347193 crio[669]: time="2024-09-20 18:05:32.010463834Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24d13f339c817f2c3d48c059df5dbe95c744b4d770d8f156847fa2a2faddfb16,PodSandboxId:d56c4fb5022a404a46d5f7d82850d40a0aa7777c181ec28335c613507545a0bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726855304216814938,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f54f7a5f2c32d6bc0ec6ee35174b79a19554688494c5a052f414808aba6d5d3,PodSandboxId:1008d082466619b9dff1a593919ad42edc22d2689cb4c63ade9d89a2aa3d82cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726855158873195435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5,PodSandboxId:cfb097797b5199e13abd148addc36710b46ff21a5a455fd00cb451d38da7e05d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158811750044,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01,PodSandboxId:503157b6402f3fdb0fbd3acb2400d53236322647d77c96f1052ed35d749180f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158740895692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54
cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9,PodSandboxId:d420593f085b4cce05c57a111a173b0556ed0c67debb38fe8785523e9aaf085f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172685514
7923720838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5,PodSandboxId:db50f6f39d94cb0d879c4bc3f389365a1aefdeeb9341d4b7a315ad9363b767e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726855146590131954,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3702c95ae17f31f21a9df60d65fe7c873d5a4a63c4bb0951d83c81da6fdcdcc9,PodSandboxId:79b3c32a6e6c014d62d3cf90229370a249daff625a451df26bc56b63f13b5011,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726855137793174604,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40531f7fb6a94d470f366df1ed8127e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce6ebcdcfa257db8db2294b5d4f9a4b32180c57f4a35e50afb651fea43c30c4,PodSandboxId:88ee68a7e316b7dd733350aa45479a511371c952904195167a88e9851da02e65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855135226742097,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f,PodSandboxId:6700b91af83d5fda728b258af762ef66bdc5850c898a59cb3d1e0f4a4a4f5bf2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855135139365214,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d,PodSandboxId:3b399285f0a3eef630d7233d6655753e34da36bbb3233127467544caa535e7b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855135143406129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db95e41c4eeebdea4c6e2d542a978f368ff56b30edfe1ff54593feed82c7c09,PodSandboxId:a832aed299e3faf778cd7e1ebb68848a5f31d0f1bbd92c129bcc7511f62ef4df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855135082024585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b0f57a8-6b4a-4e16-a7ed-4c5a5bfd4ecf name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:32 ha-347193 crio[669]: time="2024-09-20 18:05:32.049686955Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e0e2a71-c064-4b14-a273-d63a7486dddc name=/runtime.v1.RuntimeService/Version
	Sep 20 18:05:32 ha-347193 crio[669]: time="2024-09-20 18:05:32.049798847Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e0e2a71-c064-4b14-a273-d63a7486dddc name=/runtime.v1.RuntimeService/Version
	Sep 20 18:05:32 ha-347193 crio[669]: time="2024-09-20 18:05:32.056060963Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9986a6ef-6112-488c-a244-d8594e1ab930 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:05:32 ha-347193 crio[669]: time="2024-09-20 18:05:32.056555826Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855532056526049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9986a6ef-6112-488c-a244-d8594e1ab930 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:05:32 ha-347193 crio[669]: time="2024-09-20 18:05:32.057314644Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54fa2a17-c45a-4254-a2c9-45242a803d8f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:32 ha-347193 crio[669]: time="2024-09-20 18:05:32.057388932Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54fa2a17-c45a-4254-a2c9-45242a803d8f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:32 ha-347193 crio[669]: time="2024-09-20 18:05:32.058086574Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24d13f339c817f2c3d48c059df5dbe95c744b4d770d8f156847fa2a2faddfb16,PodSandboxId:d56c4fb5022a404a46d5f7d82850d40a0aa7777c181ec28335c613507545a0bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726855304216814938,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f54f7a5f2c32d6bc0ec6ee35174b79a19554688494c5a052f414808aba6d5d3,PodSandboxId:1008d082466619b9dff1a593919ad42edc22d2689cb4c63ade9d89a2aa3d82cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726855158873195435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5,PodSandboxId:cfb097797b5199e13abd148addc36710b46ff21a5a455fd00cb451d38da7e05d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158811750044,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01,PodSandboxId:503157b6402f3fdb0fbd3acb2400d53236322647d77c96f1052ed35d749180f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158740895692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54
cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9,PodSandboxId:d420593f085b4cce05c57a111a173b0556ed0c67debb38fe8785523e9aaf085f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172685514
7923720838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5,PodSandboxId:db50f6f39d94cb0d879c4bc3f389365a1aefdeeb9341d4b7a315ad9363b767e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726855146590131954,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3702c95ae17f31f21a9df60d65fe7c873d5a4a63c4bb0951d83c81da6fdcdcc9,PodSandboxId:79b3c32a6e6c014d62d3cf90229370a249daff625a451df26bc56b63f13b5011,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726855137793174604,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40531f7fb6a94d470f366df1ed8127e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce6ebcdcfa257db8db2294b5d4f9a4b32180c57f4a35e50afb651fea43c30c4,PodSandboxId:88ee68a7e316b7dd733350aa45479a511371c952904195167a88e9851da02e65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855135226742097,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f,PodSandboxId:6700b91af83d5fda728b258af762ef66bdc5850c898a59cb3d1e0f4a4a4f5bf2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855135139365214,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d,PodSandboxId:3b399285f0a3eef630d7233d6655753e34da36bbb3233127467544caa535e7b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855135143406129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db95e41c4eeebdea4c6e2d542a978f368ff56b30edfe1ff54593feed82c7c09,PodSandboxId:a832aed299e3faf778cd7e1ebb68848a5f31d0f1bbd92c129bcc7511f62ef4df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855135082024585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54fa2a17-c45a-4254-a2c9-45242a803d8f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:32 ha-347193 crio[669]: time="2024-09-20 18:05:32.094991756Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a959c86b-6195-4a69-b59d-c99b9ec64e59 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:05:32 ha-347193 crio[669]: time="2024-09-20 18:05:32.095082673Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a959c86b-6195-4a69-b59d-c99b9ec64e59 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:05:32 ha-347193 crio[669]: time="2024-09-20 18:05:32.096728876Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce223f3e-bdf8-4dba-9476-a7bc5f433b88 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:05:32 ha-347193 crio[669]: time="2024-09-20 18:05:32.097149719Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855532097128004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce223f3e-bdf8-4dba-9476-a7bc5f433b88 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:05:32 ha-347193 crio[669]: time="2024-09-20 18:05:32.097671242Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29d4150e-22c1-4dde-a611-ba03e63aa67c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:32 ha-347193 crio[669]: time="2024-09-20 18:05:32.097750153Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29d4150e-22c1-4dde-a611-ba03e63aa67c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:05:32 ha-347193 crio[669]: time="2024-09-20 18:05:32.097981297Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24d13f339c817f2c3d48c059df5dbe95c744b4d770d8f156847fa2a2faddfb16,PodSandboxId:d56c4fb5022a404a46d5f7d82850d40a0aa7777c181ec28335c613507545a0bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726855304216814938,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f54f7a5f2c32d6bc0ec6ee35174b79a19554688494c5a052f414808aba6d5d3,PodSandboxId:1008d082466619b9dff1a593919ad42edc22d2689cb4c63ade9d89a2aa3d82cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726855158873195435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5,PodSandboxId:cfb097797b5199e13abd148addc36710b46ff21a5a455fd00cb451d38da7e05d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158811750044,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01,PodSandboxId:503157b6402f3fdb0fbd3acb2400d53236322647d77c96f1052ed35d749180f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855158740895692,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54
cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9,PodSandboxId:d420593f085b4cce05c57a111a173b0556ed0c67debb38fe8785523e9aaf085f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172685514
7923720838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5,PodSandboxId:db50f6f39d94cb0d879c4bc3f389365a1aefdeeb9341d4b7a315ad9363b767e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726855146590131954,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3702c95ae17f31f21a9df60d65fe7c873d5a4a63c4bb0951d83c81da6fdcdcc9,PodSandboxId:79b3c32a6e6c014d62d3cf90229370a249daff625a451df26bc56b63f13b5011,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726855137793174604,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40531f7fb6a94d470f366df1ed8127e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce6ebcdcfa257db8db2294b5d4f9a4b32180c57f4a35e50afb651fea43c30c4,PodSandboxId:88ee68a7e316b7dd733350aa45479a511371c952904195167a88e9851da02e65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855135226742097,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f,PodSandboxId:6700b91af83d5fda728b258af762ef66bdc5850c898a59cb3d1e0f4a4a4f5bf2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855135139365214,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d,PodSandboxId:3b399285f0a3eef630d7233d6655753e34da36bbb3233127467544caa535e7b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855135143406129,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db95e41c4eeebdea4c6e2d542a978f368ff56b30edfe1ff54593feed82c7c09,PodSandboxId:a832aed299e3faf778cd7e1ebb68848a5f31d0f1bbd92c129bcc7511f62ef4df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855135082024585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29d4150e-22c1-4dde-a611-ba03e63aa67c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	24d13f339c817       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   d56c4fb5022a4       busybox-7dff88458-vv8nw
	6f54f7a5f2c32       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   1008d08246661       storage-provisioner
	998d6fb086954       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   cfb097797b519       coredns-7c65d6cfc9-6llmd
	4980eee34ad3b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   503157b6402f3       coredns-7c65d6cfc9-bkmhn
	54d750519756c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   d420593f085b4       kube-proxy-rdqkg
	ebfa9fcdc2495       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   db50f6f39d94c       kindnet-z24zp
	3702c95ae17f3       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   79b3c32a6e6c0       kube-vip-ha-347193
	dce6ebcdcfa25       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   88ee68a7e316b       kube-apiserver-ha-347193
	b9e6f76c6e332       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   3b399285f0a3e       etcd-ha-347193
	6cae0975e4bde       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   6700b91af83d5       kube-scheduler-ha-347193
	5db95e41c4eee       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   a832aed299e3f       kube-controller-manager-ha-347193
	
	
	==> coredns [4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01] <==
	[INFO] 10.244.1.2:54565 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.005838401s
	[INFO] 10.244.2.2:51366 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000199485s
	[INFO] 10.244.0.4:36108 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000120747s
	[INFO] 10.244.0.4:52405 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000754716s
	[INFO] 10.244.0.4:39912 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001939354s
	[INFO] 10.244.1.2:35811 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004936568s
	[INFO] 10.244.1.2:36016 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003046132s
	[INFO] 10.244.1.2:34653 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170016s
	[INFO] 10.244.1.2:59470 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145491s
	[INFO] 10.244.2.2:50581 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001424335s
	[INFO] 10.244.2.2:53657 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087743s
	[INFO] 10.244.0.4:45468 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002017081s
	[INFO] 10.244.0.4:50151 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148946s
	[INFO] 10.244.0.4:51594 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101915s
	[INFO] 10.244.0.4:54414 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114937s
	[INFO] 10.244.1.2:38701 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218522s
	[INFO] 10.244.1.2:41853 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000182128s
	[INFO] 10.244.2.2:48909 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000169464s
	[INFO] 10.244.0.4:55409 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111385s
	[INFO] 10.244.1.2:58822 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000137575s
	[INFO] 10.244.2.2:55178 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124535s
	[INFO] 10.244.2.2:44350 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150664s
	[INFO] 10.244.0.4:57962 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114195s
	[INFO] 10.244.0.4:56551 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094805s
	[INFO] 10.244.0.4:45171 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000054433s
	
	
	==> coredns [998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5] <==
	[INFO] 10.244.1.2:55559 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000283442s
	[INFO] 10.244.2.2:33784 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188859s
	[INFO] 10.244.2.2:58215 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00186989s
	[INFO] 10.244.2.2:52774 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099748s
	[INFO] 10.244.2.2:38149 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001158s
	[INFO] 10.244.2.2:42221 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000113646s
	[INFO] 10.244.2.2:49599 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173465s
	[INFO] 10.244.0.4:60750 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180138s
	[INFO] 10.244.0.4:46666 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171665s
	[INFO] 10.244.0.4:52002 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001444571s
	[INFO] 10.244.0.4:45151 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00006024s
	[INFO] 10.244.1.2:34989 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195829s
	[INFO] 10.244.1.2:34116 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087145s
	[INFO] 10.244.2.2:41553 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124108s
	[INFO] 10.244.2.2:35637 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116822s
	[INFO] 10.244.2.2:34355 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111835s
	[INFO] 10.244.0.4:48848 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165085s
	[INFO] 10.244.0.4:49930 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082351s
	[INFO] 10.244.0.4:35945 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077731s
	[INFO] 10.244.1.2:37666 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145796s
	[INFO] 10.244.1.2:50941 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000259758s
	[INFO] 10.244.1.2:52591 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141872s
	[INFO] 10.244.2.2:39683 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141964s
	[INFO] 10.244.2.2:51672 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000176831s
	[INFO] 10.244.0.4:58285 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000193464s
	
	
	==> describe nodes <==
	Name:               ha-347193
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-347193
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=ha-347193
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T17_59_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:59:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-347193
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:05:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:02:04 +0000   Fri, 20 Sep 2024 17:59:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:02:04 +0000   Fri, 20 Sep 2024 17:59:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:02:04 +0000   Fri, 20 Sep 2024 17:59:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:02:04 +0000   Fri, 20 Sep 2024 17:59:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-347193
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 24c3d61093c44fc4b2898b98b4bdbc70
	  System UUID:                24c3d610-93c4-4fc4-b289-8b98b4bdbc70
	  Boot ID:                    5638bfe2-e986-4137-9385-e18b7e4b519b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vv8nw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 coredns-7c65d6cfc9-6llmd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m26s
	  kube-system                 coredns-7c65d6cfc9-bkmhn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m26s
	  kube-system                 etcd-ha-347193                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m31s
	  kube-system                 kindnet-z24zp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m27s
	  kube-system                 kube-apiserver-ha-347193             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-controller-manager-ha-347193    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-proxy-rdqkg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-scheduler-ha-347193             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-vip-ha-347193                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m24s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m38s (x7 over 6m38s)  kubelet          Node ha-347193 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m38s (x8 over 6m38s)  kubelet          Node ha-347193 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m38s (x8 over 6m38s)  kubelet          Node ha-347193 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m31s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m31s                  kubelet          Node ha-347193 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m31s                  kubelet          Node ha-347193 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m31s                  kubelet          Node ha-347193 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m27s                  node-controller  Node ha-347193 event: Registered Node ha-347193 in Controller
	  Normal  NodeReady                6m14s                  kubelet          Node ha-347193 status is now: NodeReady
	  Normal  RegisteredNode           5m28s                  node-controller  Node ha-347193 event: Registered Node ha-347193 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-347193 event: Registered Node ha-347193 in Controller
	
	
	Name:               ha-347193-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-347193-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=ha-347193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_59_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:59:56 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-347193-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:02:50 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 18:01:58 +0000   Fri, 20 Sep 2024 18:03:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 18:01:58 +0000   Fri, 20 Sep 2024 18:03:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 18:01:58 +0000   Fri, 20 Sep 2024 18:03:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 18:01:58 +0000   Fri, 20 Sep 2024 18:03:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    ha-347193-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 325a97217aeb4c8f9cb24edad597fd25
	  System UUID:                325a9721-7aeb-4c8f-9cb2-4edad597fd25
	  Boot ID:                    bc33abb6-f61b-42e2-af43-631d2ede4061
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-85fk6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 etcd-ha-347193-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m34s
	  kube-system                 kindnet-cqbxl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m36s
	  kube-system                 kube-apiserver-ha-347193-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-controller-manager-ha-347193-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-proxy-ffdvq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-scheduler-ha-347193-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-vip-ha-347193-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m31s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m36s (x8 over 5m36s)  kubelet          Node ha-347193-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m36s (x8 over 5m36s)  kubelet          Node ha-347193-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m36s (x7 over 5m36s)  kubelet          Node ha-347193-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m32s                  node-controller  Node ha-347193-m02 event: Registered Node ha-347193-m02 in Controller
	  Normal  RegisteredNode           5m28s                  node-controller  Node ha-347193-m02 event: Registered Node ha-347193-m02 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-347193-m02 event: Registered Node ha-347193-m02 in Controller
	  Normal  NodeNotReady             2m2s                   node-controller  Node ha-347193-m02 status is now: NodeNotReady
	
	
	Name:               ha-347193-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-347193-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=ha-347193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_01_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:01:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-347193-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:05:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:02:13 +0000   Fri, 20 Sep 2024 18:01:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:02:13 +0000   Fri, 20 Sep 2024 18:01:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:02:13 +0000   Fri, 20 Sep 2024 18:01:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:02:13 +0000   Fri, 20 Sep 2024 18:01:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-347193-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 987815694814485e84522bfda359ab42
	  System UUID:                98781569-4814-485e-8452-2bfda359ab42
	  Boot ID:                    fc58e56d-3ed2-412a-b9e5-cb7d5fb81d74
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-p824h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 etcd-ha-347193-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kindnet-5msnk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m21s
	  kube-system                 kube-apiserver-ha-347193-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-ha-347193-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-pccxp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-scheduler-ha-347193-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-vip-ha-347193-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m21s (x8 over 4m21s)  kubelet          Node ha-347193-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s (x8 over 4m21s)  kubelet          Node ha-347193-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s (x7 over 4m21s)  kubelet          Node ha-347193-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-347193-m03 event: Registered Node ha-347193-m03 in Controller
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-347193-m03 event: Registered Node ha-347193-m03 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-347193-m03 event: Registered Node ha-347193-m03 in Controller
	
	
	Name:               ha-347193-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-347193-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=ha-347193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_02_19_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:02:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-347193-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:05:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:02:49 +0000   Fri, 20 Sep 2024 18:02:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:02:49 +0000   Fri, 20 Sep 2024 18:02:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:02:49 +0000   Fri, 20 Sep 2024 18:02:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:02:49 +0000   Fri, 20 Sep 2024 18:02:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    ha-347193-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 36beb0176a7e4c449ee02f4adaf970e8
	  System UUID:                36beb017-6a7e-4c44-9ee0-2f4adaf970e8
	  Boot ID:                    347456dd-4ba6-4d92-bdee-958017f6c085
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-t5f94       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m14s
	  kube-system                 kube-proxy-gtwzd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m9s                   kube-proxy       
	  Normal  CIDRAssignmentFailed     3m14s                  cidrAllocator    Node ha-347193-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m14s (x2 over 3m14s)  kubelet          Node ha-347193-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m14s (x2 over 3m14s)  kubelet          Node ha-347193-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m14s (x2 over 3m14s)  kubelet          Node ha-347193-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m13s                  node-controller  Node ha-347193-m04 event: Registered Node ha-347193-m04 in Controller
	  Normal  RegisteredNode           3m13s                  node-controller  Node ha-347193-m04 event: Registered Node ha-347193-m04 in Controller
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-347193-m04 event: Registered Node ha-347193-m04 in Controller
	  Normal  NodeReady                2m54s                  kubelet          Node ha-347193-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep20 17:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051116] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037930] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.768779] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.874615] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.547112] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.314105] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.055929] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059483] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.173430] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.132192] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.252987] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +3.876503] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +5.009721] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.059883] kauditd_printk_skb: 158 callbacks suppressed
	[Sep20 17:59] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +0.095619] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.048443] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.211237] kauditd_printk_skb: 38 callbacks suppressed
	[Sep20 18:00] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d] <==
	{"level":"warn","ts":"2024-09-20T18:05:32.258692Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.346479Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.354226Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.356143Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.359426Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.374469Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.381668Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.388503Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.392950Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.396253Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.402468Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.408562Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.415123Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.419138Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.422974Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.431485Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.437873Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.443956Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.448449Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.451871Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.455974Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.456151Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.464062Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.470507Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:05:32.524215Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"7525355198545e9d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:05:32 up 7 min,  0 users,  load average: 0.14, 0.25, 0.14
	Linux ha-347193 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5] <==
	I0920 18:04:57.662486       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	I0920 18:05:07.652020       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:05:07.652151       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	I0920 18:05:07.652431       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:05:07.652468       1 main.go:299] handling current node
	I0920 18:05:07.652492       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:05:07.652521       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:05:07.652622       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0920 18:05:07.652641       1 main.go:322] Node ha-347193-m03 has CIDR [10.244.2.0/24] 
	I0920 18:05:17.653044       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:05:17.653093       1 main.go:299] handling current node
	I0920 18:05:17.653117       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:05:17.653124       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:05:17.653356       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0920 18:05:17.653380       1 main.go:322] Node ha-347193-m03 has CIDR [10.244.2.0/24] 
	I0920 18:05:17.653452       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:05:17.653459       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	I0920 18:05:27.659917       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:05:27.660082       1 main.go:299] handling current node
	I0920 18:05:27.660132       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:05:27.660155       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:05:27.660434       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0920 18:05:27.660474       1 main.go:322] Node ha-347193-m03 has CIDR [10.244.2.0/24] 
	I0920 18:05:27.660547       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:05:27.660567       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [dce6ebcdcfa257db8db2294b5d4f9a4b32180c57f4a35e50afb651fea43c30c4] <==
	I0920 17:59:00.040430       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0920 17:59:00.047869       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.246]
	I0920 17:59:00.048849       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 17:59:00.053986       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 17:59:00.261870       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 17:59:01.502885       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 17:59:01.521849       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0920 17:59:01.592823       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 17:59:05.362201       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0920 17:59:05.964721       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0920 18:01:45.789676       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35350: use of closed network connection
	E0920 18:01:45.987221       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35370: use of closed network connection
	E0920 18:01:46.203136       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35382: use of closed network connection
	E0920 18:01:46.410018       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35390: use of closed network connection
	E0920 18:01:46.596914       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35412: use of closed network connection
	E0920 18:01:46.785733       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35422: use of closed network connection
	E0920 18:01:46.963707       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35432: use of closed network connection
	E0920 18:01:47.352644       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35476: use of closed network connection
	E0920 18:01:47.677101       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35494: use of closed network connection
	E0920 18:01:47.852966       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35506: use of closed network connection
	E0920 18:01:48.037422       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35530: use of closed network connection
	E0920 18:01:48.215519       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35552: use of closed network connection
	E0920 18:01:48.395158       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35562: use of closed network connection
	E0920 18:01:48.571105       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35590: use of closed network connection
	W0920 18:03:10.061403       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.246 192.168.39.250]
	
	
	==> kube-controller-manager [5db95e41c4eeebdea4c6e2d542a978f368ff56b30edfe1ff54593feed82c7c09] <==
	I0920 18:02:18.858784       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:18.864042       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:18.983762       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	E0920 18:02:18.993581       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"f4761ca4-6943-48ac-a03b-0da33530a65b\", ResourceVersion:\"914\", Generation:1, CreationTimestamp:time.Date(2024, time.September, 20, 17, 59, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0025004a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\
", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource
)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00281e400), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0024c5650), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolum
eSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVo
lumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0024c5668), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtua
lDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.31.1\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc0025004e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Res
ourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\
"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0026a90e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc002a3e7a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002939a00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.Host
Alias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc002991a60)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002a3e800)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfille
d on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0920 18:02:19.395187       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:19.617626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:19.719171       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:19.749178       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:20.458068       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-347193-m04"
	I0920 18:02:20.458612       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:20.582907       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:28.989089       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:38.284793       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-347193-m04"
	I0920 18:02:38.284953       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:38.304718       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:39.566872       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:02:49.271755       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:03:30.485154       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-347193-m04"
	I0920 18:03:30.485472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m02"
	I0920 18:03:30.507411       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m02"
	I0920 18:03:30.648029       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="83.389173ms"
	I0920 18:03:30.648163       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.297µs"
	I0920 18:03:34.617103       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m02"
	I0920 18:03:35.794110       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m02"
	
	
	==> kube-proxy [54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 17:59:08.146402       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 17:59:08.169465       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.246"]
	E0920 17:59:08.169636       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:59:08.200549       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 17:59:08.200672       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 17:59:08.200715       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:59:08.203687       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:59:08.204074       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:59:08.204250       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:59:08.207892       1 config.go:199] "Starting service config controller"
	I0920 17:59:08.208388       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:59:08.208680       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:59:08.211000       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:59:08.208820       1 config.go:328] "Starting node config controller"
	I0920 17:59:08.211110       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:59:08.308818       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:59:08.311223       1 shared_informer.go:320] Caches are synced for node config
	I0920 17:59:08.311448       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f] <==
	W0920 17:58:59.136078       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 17:58:59.136125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.152907       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 17:58:59.152970       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.232222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 17:58:59.232522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.417181       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 17:58:59.417310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.425477       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 17:58:59.426116       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.425550       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 17:58:59.426253       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.487540       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 17:58:59.487590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.537813       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:58:59.537936       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.543453       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 17:58:59.543567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.650341       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 17:58:59.650386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 17:59:01.377349       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0920 18:02:18.846875       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-t5f94\": pod kindnet-t5f94 is already assigned to node \"ha-347193-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-t5f94" node="ha-347193-m04"
	E0920 18:02:18.847041       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 33dab94e-9da4-4a58-83f6-a7a351c8c216(kube-system/kindnet-t5f94) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-t5f94"
	E0920 18:02:18.847081       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-t5f94\": pod kindnet-t5f94 is already assigned to node \"ha-347193-m04\"" pod="kube-system/kindnet-t5f94"
	I0920 18:02:18.847108       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-t5f94" node="ha-347193-m04"
	
	
	==> kubelet <==
	Sep 20 18:04:01 ha-347193 kubelet[1310]: E0920 18:04:01.734604    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855441733710654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:01 ha-347193 kubelet[1310]: E0920 18:04:01.734645    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855441733710654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:11 ha-347193 kubelet[1310]: E0920 18:04:11.739024    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855451738567159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:11 ha-347193 kubelet[1310]: E0920 18:04:11.739486    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855451738567159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:21 ha-347193 kubelet[1310]: E0920 18:04:21.742025    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855461741705291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:21 ha-347193 kubelet[1310]: E0920 18:04:21.742425    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855461741705291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:31 ha-347193 kubelet[1310]: E0920 18:04:31.746728    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855471746051287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:31 ha-347193 kubelet[1310]: E0920 18:04:31.746767    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855471746051287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:41 ha-347193 kubelet[1310]: E0920 18:04:41.748167    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855481747854827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:41 ha-347193 kubelet[1310]: E0920 18:04:41.748208    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855481747854827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:51 ha-347193 kubelet[1310]: E0920 18:04:51.749843    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855491749438137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:04:51 ha-347193 kubelet[1310]: E0920 18:04:51.750125    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855491749438137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:05:01 ha-347193 kubelet[1310]: E0920 18:05:01.623246    1310 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 18:05:01 ha-347193 kubelet[1310]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:05:01 ha-347193 kubelet[1310]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:05:01 ha-347193 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:05:01 ha-347193 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:05:01 ha-347193 kubelet[1310]: E0920 18:05:01.752334    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855501751934283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:05:01 ha-347193 kubelet[1310]: E0920 18:05:01.752368    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855501751934283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:05:11 ha-347193 kubelet[1310]: E0920 18:05:11.756769    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855511755858702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:05:11 ha-347193 kubelet[1310]: E0920 18:05:11.757336    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855511755858702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:05:21 ha-347193 kubelet[1310]: E0920 18:05:21.759742    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855521759123751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:05:21 ha-347193 kubelet[1310]: E0920 18:05:21.759770    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855521759123751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:05:31 ha-347193 kubelet[1310]: E0920 18:05:31.761239    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855531760926488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:05:31 ha-347193 kubelet[1310]: E0920 18:05:31.761318    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855531760926488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-347193 -n ha-347193
helpers_test.go:261: (dbg) Run:  kubectl --context ha-347193 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.36s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-347193 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-347193 -v=7 --alsologtostderr
E0920 18:07:29.487630  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-347193 -v=7 --alsologtostderr: exit status 82 (2m1.882812817s)

                                                
                                                
-- stdout --
	* Stopping node "ha-347193-m04"  ...
	* Stopping node "ha-347193-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:05:37.623597  261704 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:05:37.623860  261704 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:05:37.623874  261704 out.go:358] Setting ErrFile to fd 2...
	I0920 18:05:37.623879  261704 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:05:37.624082  261704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 18:05:37.624354  261704 out.go:352] Setting JSON to false
	I0920 18:05:37.624447  261704 mustload.go:65] Loading cluster: ha-347193
	I0920 18:05:37.624853  261704 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:05:37.624958  261704 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 18:05:37.625831  261704 mustload.go:65] Loading cluster: ha-347193
	I0920 18:05:37.626061  261704 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:05:37.626108  261704 stop.go:39] StopHost: ha-347193-m04
	I0920 18:05:37.626508  261704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:05:37.626560  261704 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:05:37.642858  261704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37909
	I0920 18:05:37.643520  261704 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:05:37.644259  261704 main.go:141] libmachine: Using API Version  1
	I0920 18:05:37.644298  261704 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:05:37.644648  261704 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:05:37.647289  261704 out.go:177] * Stopping node "ha-347193-m04"  ...
	I0920 18:05:37.649065  261704 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 18:05:37.649123  261704 main.go:141] libmachine: (ha-347193-m04) Calling .DriverName
	I0920 18:05:37.649423  261704 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 18:05:37.649463  261704 main.go:141] libmachine: (ha-347193-m04) Calling .GetSSHHostname
	I0920 18:05:37.652860  261704 main.go:141] libmachine: (ha-347193-m04) DBG | domain ha-347193-m04 has defined MAC address 52:54:00:51:d7:3c in network mk-ha-347193
	I0920 18:05:37.653630  261704 main.go:141] libmachine: (ha-347193-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:d7:3c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:02:03 +0000 UTC Type:0 Mac:52:54:00:51:d7:3c Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-347193-m04 Clientid:01:52:54:00:51:d7:3c}
	I0920 18:05:37.653673  261704 main.go:141] libmachine: (ha-347193-m04) DBG | domain ha-347193-m04 has defined IP address 192.168.39.234 and MAC address 52:54:00:51:d7:3c in network mk-ha-347193
	I0920 18:05:37.653853  261704 main.go:141] libmachine: (ha-347193-m04) Calling .GetSSHPort
	I0920 18:05:37.654115  261704 main.go:141] libmachine: (ha-347193-m04) Calling .GetSSHKeyPath
	I0920 18:05:37.654333  261704 main.go:141] libmachine: (ha-347193-m04) Calling .GetSSHUsername
	I0920 18:05:37.654528  261704 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m04/id_rsa Username:docker}
	I0920 18:05:37.747130  261704 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 18:05:37.803738  261704 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 18:05:37.857130  261704 main.go:141] libmachine: Stopping "ha-347193-m04"...
	I0920 18:05:37.857188  261704 main.go:141] libmachine: (ha-347193-m04) Calling .GetState
	I0920 18:05:37.859204  261704 main.go:141] libmachine: (ha-347193-m04) Calling .Stop
	I0920 18:05:37.862811  261704 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 0/120
	I0920 18:05:38.988908  261704 main.go:141] libmachine: (ha-347193-m04) Calling .GetState
	I0920 18:05:38.990425  261704 main.go:141] libmachine: Machine "ha-347193-m04" was stopped.
	I0920 18:05:38.990454  261704 stop.go:75] duration metric: took 1.341386323s to stop
	I0920 18:05:38.990485  261704 stop.go:39] StopHost: ha-347193-m03
	I0920 18:05:38.990886  261704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:05:38.990949  261704 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:05:39.007032  261704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43927
	I0920 18:05:39.007614  261704 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:05:39.008137  261704 main.go:141] libmachine: Using API Version  1
	I0920 18:05:39.008162  261704 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:05:39.008549  261704 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:05:39.011250  261704 out.go:177] * Stopping node "ha-347193-m03"  ...
	I0920 18:05:39.012954  261704 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 18:05:39.012991  261704 main.go:141] libmachine: (ha-347193-m03) Calling .DriverName
	I0920 18:05:39.013316  261704 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 18:05:39.013348  261704 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHHostname
	I0920 18:05:39.017450  261704 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:05:39.018131  261704 main.go:141] libmachine: (ha-347193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:1a:4c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:00:36 +0000 UTC Type:0 Mac:52:54:00:80:1a:4c Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-347193-m03 Clientid:01:52:54:00:80:1a:4c}
	I0920 18:05:39.018165  261704 main.go:141] libmachine: (ha-347193-m03) DBG | domain ha-347193-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:80:1a:4c in network mk-ha-347193
	I0920 18:05:39.018484  261704 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHPort
	I0920 18:05:39.018792  261704 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHKeyPath
	I0920 18:05:39.018996  261704 main.go:141] libmachine: (ha-347193-m03) Calling .GetSSHUsername
	I0920 18:05:39.019208  261704 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m03/id_rsa Username:docker}
	I0920 18:05:39.111135  261704 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 18:05:39.166424  261704 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 18:05:39.223948  261704 main.go:141] libmachine: Stopping "ha-347193-m03"...
	I0920 18:05:39.223978  261704 main.go:141] libmachine: (ha-347193-m03) Calling .GetState
	I0920 18:05:39.225733  261704 main.go:141] libmachine: (ha-347193-m03) Calling .Stop
	I0920 18:05:39.229887  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 0/120
	I0920 18:05:40.231486  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 1/120
	I0920 18:05:41.233328  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 2/120
	I0920 18:05:42.235446  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 3/120
	I0920 18:05:43.237302  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 4/120
	I0920 18:05:44.239520  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 5/120
	I0920 18:05:45.241266  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 6/120
	I0920 18:05:46.243390  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 7/120
	I0920 18:05:47.245339  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 8/120
	I0920 18:05:48.246904  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 9/120
	I0920 18:05:49.249490  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 10/120
	I0920 18:05:50.251502  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 11/120
	I0920 18:05:51.253145  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 12/120
	I0920 18:05:52.254587  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 13/120
	I0920 18:05:53.256385  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 14/120
	I0920 18:05:54.258894  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 15/120
	I0920 18:05:55.260177  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 16/120
	I0920 18:05:56.262048  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 17/120
	I0920 18:05:57.263504  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 18/120
	I0920 18:05:58.265262  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 19/120
	I0920 18:05:59.267691  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 20/120
	I0920 18:06:00.269689  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 21/120
	I0920 18:06:01.271780  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 22/120
	I0920 18:06:02.273813  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 23/120
	I0920 18:06:03.275794  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 24/120
	I0920 18:06:04.278088  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 25/120
	I0920 18:06:05.280230  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 26/120
	I0920 18:06:06.281595  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 27/120
	I0920 18:06:07.283541  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 28/120
	I0920 18:06:08.285199  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 29/120
	I0920 18:06:09.288403  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 30/120
	I0920 18:06:10.290265  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 31/120
	I0920 18:06:11.292438  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 32/120
	I0920 18:06:12.294270  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 33/120
	I0920 18:06:13.296509  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 34/120
	I0920 18:06:14.298641  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 35/120
	I0920 18:06:15.300169  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 36/120
	I0920 18:06:16.302083  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 37/120
	I0920 18:06:17.303801  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 38/120
	I0920 18:06:18.305568  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 39/120
	I0920 18:06:19.308175  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 40/120
	I0920 18:06:20.309968  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 41/120
	I0920 18:06:21.312463  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 42/120
	I0920 18:06:22.314334  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 43/120
	I0920 18:06:23.316197  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 44/120
	I0920 18:06:24.318349  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 45/120
	I0920 18:06:25.320426  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 46/120
	I0920 18:06:26.322746  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 47/120
	I0920 18:06:27.324133  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 48/120
	I0920 18:06:28.325817  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 49/120
	I0920 18:06:29.328150  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 50/120
	I0920 18:06:30.329786  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 51/120
	I0920 18:06:31.331618  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 52/120
	I0920 18:06:32.333055  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 53/120
	I0920 18:06:33.334511  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 54/120
	I0920 18:06:34.336387  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 55/120
	I0920 18:06:35.337924  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 56/120
	I0920 18:06:36.340109  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 57/120
	I0920 18:06:37.341635  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 58/120
	I0920 18:06:38.343536  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 59/120
	I0920 18:06:39.345397  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 60/120
	I0920 18:06:40.346907  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 61/120
	I0920 18:06:41.348414  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 62/120
	I0920 18:06:42.350003  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 63/120
	I0920 18:06:43.351857  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 64/120
	I0920 18:06:44.353680  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 65/120
	I0920 18:06:45.355955  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 66/120
	I0920 18:06:46.357388  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 67/120
	I0920 18:06:47.359053  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 68/120
	I0920 18:06:48.361173  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 69/120
	I0920 18:06:49.363096  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 70/120
	I0920 18:06:50.364788  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 71/120
	I0920 18:06:51.366778  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 72/120
	I0920 18:06:52.368311  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 73/120
	I0920 18:06:53.369599  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 74/120
	I0920 18:06:54.371713  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 75/120
	I0920 18:06:55.373210  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 76/120
	I0920 18:06:56.374739  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 77/120
	I0920 18:06:57.376256  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 78/120
	I0920 18:06:58.377947  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 79/120
	I0920 18:06:59.380205  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 80/120
	I0920 18:07:00.382121  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 81/120
	I0920 18:07:01.384057  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 82/120
	I0920 18:07:02.385469  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 83/120
	I0920 18:07:03.387125  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 84/120
	I0920 18:07:04.389580  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 85/120
	I0920 18:07:05.391147  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 86/120
	I0920 18:07:06.392833  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 87/120
	I0920 18:07:07.394551  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 88/120
	I0920 18:07:08.396099  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 89/120
	I0920 18:07:09.398595  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 90/120
	I0920 18:07:10.400139  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 91/120
	I0920 18:07:11.401650  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 92/120
	I0920 18:07:12.403408  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 93/120
	I0920 18:07:13.405122  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 94/120
	I0920 18:07:14.407155  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 95/120
	I0920 18:07:15.408711  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 96/120
	I0920 18:07:16.410214  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 97/120
	I0920 18:07:17.412033  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 98/120
	I0920 18:07:18.413502  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 99/120
	I0920 18:07:19.415400  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 100/120
	I0920 18:07:20.417142  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 101/120
	I0920 18:07:21.418617  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 102/120
	I0920 18:07:22.420461  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 103/120
	I0920 18:07:23.421878  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 104/120
	I0920 18:07:24.423477  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 105/120
	I0920 18:07:25.425020  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 106/120
	I0920 18:07:26.426451  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 107/120
	I0920 18:07:27.428748  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 108/120
	I0920 18:07:28.430322  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 109/120
	I0920 18:07:29.431965  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 110/120
	I0920 18:07:30.434329  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 111/120
	I0920 18:07:31.435661  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 112/120
	I0920 18:07:32.437079  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 113/120
	I0920 18:07:33.438446  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 114/120
	I0920 18:07:34.440284  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 115/120
	I0920 18:07:35.441851  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 116/120
	I0920 18:07:36.443153  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 117/120
	I0920 18:07:37.444690  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 118/120
	I0920 18:07:38.446708  261704 main.go:141] libmachine: (ha-347193-m03) Waiting for machine to stop 119/120
	I0920 18:07:39.447806  261704 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0920 18:07:39.447885  261704 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0920 18:07:39.449960  261704 out.go:201] 
	W0920 18:07:39.451389  261704 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0920 18:07:39.451405  261704 out.go:270] * 
	* 
	W0920 18:07:39.453808  261704 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:07:39.455149  261704 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-347193 -v=7 --alsologtostderr" : exit status 82
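The stderr above shows what led to exit status 82: after the stop is issued, the driver polls the VM state roughly once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") and gives up with GUEST_STOP_TIMEOUT when the machine still reports "Running". A minimal sketch of that retry loop; the interface and the fake VM are illustrative stand-ins, not minikube's actual types:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// vm stands in for whatever the driver exposes; only Stop and State matter here.
	type vm interface {
		Stop() error
		State() (string, error)
	}

	// stopAndWait issues a stop, then polls once per second for at most `attempts`
	// rounds. If the machine never leaves "Running", it returns the same kind of
	// error that the log above surfaces as GUEST_STOP_TIMEOUT.
	func stopAndWait(m vm, attempts int) error {
		if err := m.Stop(); err != nil {
			return err
		}
		for i := 0; i < attempts; i++ {
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			state, err := m.State()
			if err != nil {
				return err
			}
			if state != "Running" {
				return nil
			}
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	// fakeVM reports "Stopped" after a few polls so the example terminates quickly.
	type fakeVM struct{ polls int }

	func (f *fakeVM) Stop() error { return nil }
	func (f *fakeVM) State() (string, error) {
		f.polls++
		if f.polls > 3 {
			return "Stopped", nil
		}
		return "Running", nil
	}

	func main() {
		if err := stopAndWait(&fakeVM{}, 120); err != nil {
			fmt.Println("stop err:", err)
		}
	}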
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-347193 --wait=true -v=7 --alsologtostderr
E0920 18:07:57.195846  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:08:30.943013  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-347193 --wait=true -v=7 --alsologtostderr: (4m6.369346447s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-347193
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-347193 -n ha-347193
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-347193 logs -n 25: (1.890457444s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-347193 cp ha-347193-m03:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m02:/home/docker/cp-test_ha-347193-m03_ha-347193-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193-m02 sudo cat                                          | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m03_ha-347193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m03:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04:/home/docker/cp-test_ha-347193-m03_ha-347193-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193-m04 sudo cat                                          | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m03_ha-347193-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-347193 cp testdata/cp-test.txt                                                | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3833348347/001/cp-test_ha-347193-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193:/home/docker/cp-test_ha-347193-m04_ha-347193.txt                       |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193 sudo cat                                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m04_ha-347193.txt                                 |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m02:/home/docker/cp-test_ha-347193-m04_ha-347193-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193-m02 sudo cat                                          | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m04_ha-347193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m03:/home/docker/cp-test_ha-347193-m04_ha-347193-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193-m03 sudo cat                                          | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m04_ha-347193-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-347193 node stop m02 -v=7                                                     | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-347193 node start m02 -v=7                                                    | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-347193 -v=7                                                           | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-347193 -v=7                                                                | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-347193 --wait=true -v=7                                                    | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:07 UTC | 20 Sep 24 18:11 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-347193                                                                | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:11 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:07:39
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:07:39.503894  262197 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:07:39.504024  262197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:07:39.504033  262197 out.go:358] Setting ErrFile to fd 2...
	I0920 18:07:39.504037  262197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:07:39.504275  262197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 18:07:39.504887  262197 out.go:352] Setting JSON to false
	I0920 18:07:39.505896  262197 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6602,"bootTime":1726849057,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:07:39.506020  262197 start.go:139] virtualization: kvm guest
	I0920 18:07:39.508454  262197 out.go:177] * [ha-347193] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:07:39.510022  262197 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 18:07:39.510065  262197 notify.go:220] Checking for updates...
	I0920 18:07:39.512364  262197 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:07:39.513870  262197 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 18:07:39.515227  262197 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:07:39.516549  262197 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:07:39.517938  262197 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:07:39.520032  262197 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:07:39.520168  262197 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:07:39.520853  262197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:07:39.520925  262197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:07:39.538153  262197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40871
	I0920 18:07:39.538609  262197 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:07:39.539327  262197 main.go:141] libmachine: Using API Version  1
	I0920 18:07:39.539356  262197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:07:39.539761  262197 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:07:39.540021  262197 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:07:39.578042  262197 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:07:39.579472  262197 start.go:297] selected driver: kvm2
	I0920 18:07:39.579501  262197 start.go:901] validating driver "kvm2" against &{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.234 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:07:39.579704  262197 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:07:39.580186  262197 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:07:39.580317  262197 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:07:39.596678  262197 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:07:39.597435  262197 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:07:39.597481  262197 cni.go:84] Creating CNI manager for ""
	I0920 18:07:39.597531  262197 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 18:07:39.597594  262197 start.go:340] cluster config:
	{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.234 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel
:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:07:39.597737  262197 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:07:39.599843  262197 out.go:177] * Starting "ha-347193" primary control-plane node in "ha-347193" cluster
	I0920 18:07:39.601132  262197 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:07:39.601199  262197 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:07:39.601211  262197 cache.go:56] Caching tarball of preloaded images
	I0920 18:07:39.601321  262197 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:07:39.601333  262197 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
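The "Found local preload ... in cache, skipping download" lines reflect a simple cache check: the preloaded image tarball for the requested Kubernetes version is looked up on disk before any download is attempted. A rough sketch of that check, with the path layout copied from the log and the download branch left as a stub:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadTarball returns the expected path of the cached preload for a given
	// Kubernetes version and whether it is already present on disk.
	func preloadTarball(cacheDir, k8sVersion string) (string, bool) {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
		path := filepath.Join(cacheDir, name)
		_, err := os.Stat(path)
		return path, err == nil
	}

	func main() {
		cache := filepath.Join(os.Getenv("HOME"), ".minikube/cache/preloaded-tarball")
		if path, ok := preloadTarball(cache, "v1.31.1"); ok {
			fmt.Println("found local preload, skipping download:", path)
		} else {
			fmt.Println("no cached preload, would download:", path) // download step omitted
		}
	}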
	I0920 18:07:39.601444  262197 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 18:07:39.601681  262197 start.go:360] acquireMachinesLock for ha-347193: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:07:39.601729  262197 start.go:364] duration metric: took 27.402µs to acquireMachinesLock for "ha-347193"
	I0920 18:07:39.601744  262197 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:07:39.601749  262197 fix.go:54] fixHost starting: 
	I0920 18:07:39.602028  262197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:07:39.602065  262197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:07:39.617626  262197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
	I0920 18:07:39.618189  262197 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:07:39.618694  262197 main.go:141] libmachine: Using API Version  1
	I0920 18:07:39.618721  262197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:07:39.619089  262197 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:07:39.619281  262197 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:07:39.619421  262197 main.go:141] libmachine: (ha-347193) Calling .GetState
	I0920 18:07:39.621577  262197 fix.go:112] recreateIfNeeded on ha-347193: state=Running err=<nil>
	W0920 18:07:39.621629  262197 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:07:39.624120  262197 out.go:177] * Updating the running kvm2 "ha-347193" VM ...
	I0920 18:07:39.625648  262197 machine.go:93] provisionDockerMachine start ...
	I0920 18:07:39.625685  262197 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:07:39.626013  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:07:39.629148  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:39.629675  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:07:39.629732  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:39.629932  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:07:39.630138  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:07:39.630314  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:07:39.630438  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:07:39.630660  262197 main.go:141] libmachine: Using SSH client type: native
	I0920 18:07:39.630880  262197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:07:39.630892  262197 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:07:39.743087  262197 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-347193
	
	I0920 18:07:39.743119  262197 main.go:141] libmachine: (ha-347193) Calling .GetMachineName
	I0920 18:07:39.743431  262197 buildroot.go:166] provisioning hostname "ha-347193"
	I0920 18:07:39.743464  262197 main.go:141] libmachine: (ha-347193) Calling .GetMachineName
	I0920 18:07:39.743663  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:07:39.747258  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:39.747630  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:07:39.747662  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:39.747802  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:07:39.748018  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:07:39.748170  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:07:39.748283  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:07:39.748472  262197 main.go:141] libmachine: Using SSH client type: native
	I0920 18:07:39.748703  262197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:07:39.748721  262197 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-347193 && echo "ha-347193" | sudo tee /etc/hostname
	I0920 18:07:39.886045  262197 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-347193
	
	I0920 18:07:39.886084  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:07:39.889057  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:39.889417  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:07:39.889447  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:39.889682  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:07:39.889929  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:07:39.890168  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:07:39.890340  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:07:39.890523  262197 main.go:141] libmachine: Using SSH client type: native
	I0920 18:07:39.890729  262197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:07:39.890752  262197 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-347193' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-347193/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-347193' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:07:40.003602  262197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
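The provisioning commands above (set the hostname, then make sure /etc/hosts resolves it) are all run over SSH against the guest. A stripped-down sketch of the same two steps using golang.org/x/crypto/ssh directly; the key path, address, and the simplified /etc/hosts check are assumptions for illustration, not minikube's exact logic:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// run executes a single command on an established SSH connection.
	func run(client *ssh.Client, cmd string) (string, error) {
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		key, err := os.ReadFile("/home/jenkins/.minikube/machines/ha-347193/id_rsa") // assumed path
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		client, err := ssh.Dial("tcp", "192.168.39.246:22", &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
		})
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		name := "ha-347193"
		// Same two steps the log shows: persist the hostname, then pin it in /etc/hosts.
		if _, err := run(client, fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)); err != nil {
			log.Fatal(err)
		}
		if _, err := run(client, fmt.Sprintf(
			"grep -xq '.*\\s%s' /etc/hosts || echo '127.0.1.1 %s' | sudo tee -a /etc/hosts", name, name)); err != nil {
			log.Fatal(err)
		}
	}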
	I0920 18:07:40.003645  262197 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 18:07:40.003683  262197 buildroot.go:174] setting up certificates
	I0920 18:07:40.003701  262197 provision.go:84] configureAuth start
	I0920 18:07:40.003719  262197 main.go:141] libmachine: (ha-347193) Calling .GetMachineName
	I0920 18:07:40.004065  262197 main.go:141] libmachine: (ha-347193) Calling .GetIP
	I0920 18:07:40.007905  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:40.008489  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:07:40.008513  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:40.008782  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:07:40.011199  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:40.011622  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:07:40.011669  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:40.011859  262197 provision.go:143] copyHostCerts
	I0920 18:07:40.011894  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 18:07:40.011940  262197 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 18:07:40.011958  262197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 18:07:40.012029  262197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 18:07:40.012119  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 18:07:40.012140  262197 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 18:07:40.012144  262197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 18:07:40.012169  262197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 18:07:40.012209  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 18:07:40.012230  262197 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 18:07:40.012236  262197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 18:07:40.012258  262197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 18:07:40.012307  262197 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.ha-347193 san=[127.0.0.1 192.168.39.246 ha-347193 localhost minikube]
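The "generating server cert ... san=[127.0.0.1 192.168.39.246 ha-347193 localhost minikube]" line boils down to signing a serving certificate with the cluster CA and listing every name and IP a client may use as a SAN. A self-contained sketch with crypto/x509; a throwaway ECDSA CA is created in-process so the example runs standalone, whereas the real CA is loaded from .minikube/certs and need not use ECDSA:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA; minikube would load ca.pem / ca-key.pem from .minikube/certs instead.
		caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SANs seen in the log line above.
		srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-347193"}},
			DNSNames:     []string{"ha-347193", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.246")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}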
	I0920 18:07:40.212632  262197 provision.go:177] copyRemoteCerts
	I0920 18:07:40.212709  262197 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:07:40.212738  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:07:40.215770  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:40.216077  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:07:40.216115  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:40.216351  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:07:40.216614  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:07:40.216763  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:07:40.216919  262197 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 18:07:40.301135  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 18:07:40.301229  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:07:40.329452  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 18:07:40.329557  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0920 18:07:40.355961  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 18:07:40.356055  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:07:40.383019  262197 provision.go:87] duration metric: took 379.301144ms to configureAuth
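copyRemoteCerts then pushes ca.pem, server.pem and server-key.pem into /etc/docker on the guest (the three scp lines above). An equivalent sketch with plain scp/ssh instead of minikube's internal runner; the local paths are abbreviated, and the stage-then-sudo-install step is an assumption about how a non-root SSH user would write into /etc/docker:

	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
	)

	// pushCert copies one certificate to the guest and moves it into place with sudo,
	// since the "docker" SSH user cannot write to /etc/docker directly.
	func pushCert(host, local, remote string) error {
		staged := "/tmp/" + filepath.Base(remote)
		if out, err := exec.Command("scp", local, host+":"+staged).CombinedOutput(); err != nil {
			return fmt.Errorf("scp %s: %v: %s", local, err, out)
		}
		if out, err := exec.Command("ssh", host, "sudo install -m 0640 "+staged+" "+remote).CombinedOutput(); err != nil {
			return fmt.Errorf("install %s: %v: %s", remote, err, out)
		}
		return nil
	}

	func main() {
		host := "docker@192.168.39.246"
		certs := map[string]string{ // local file -> destination on the guest
			"ca.pem":         "/etc/docker/ca.pem",
			"server.pem":     "/etc/docker/server.pem",
			"server-key.pem": "/etc/docker/server-key.pem",
		}
		for local, remote := range certs {
			if err := pushCert(host, local, remote); err != nil {
				fmt.Println(err)
			}
		}
	}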
	I0920 18:07:40.383051  262197 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:07:40.383287  262197 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:07:40.383390  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:07:40.386239  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:40.386599  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:07:40.386623  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:40.386823  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:07:40.387107  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:07:40.387403  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:07:40.387553  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:07:40.387802  262197 main.go:141] libmachine: Using SSH client type: native
	I0920 18:07:40.387984  262197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:07:40.388000  262197 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:09:11.317242  262197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:09:11.317283  262197 machine.go:96] duration metric: took 1m31.691605401s to provisionDockerMachine
	I0920 18:09:11.317296  262197 start.go:293] postStartSetup for "ha-347193" (driver="kvm2")
	I0920 18:09:11.317316  262197 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:09:11.317334  262197 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:09:11.317647  262197 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:09:11.317682  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:09:11.322426  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.323312  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:09:11.323345  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.323526  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:09:11.323822  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:09:11.324068  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:09:11.324294  262197 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 18:09:11.414958  262197 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:09:11.419701  262197 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:09:11.419731  262197 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 18:09:11.419827  262197 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 18:09:11.419942  262197 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 18:09:11.419956  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /etc/ssl/certs/2448492.pem
	I0920 18:09:11.420066  262197 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:09:11.430425  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:09:11.456605  262197 start.go:296] duration metric: took 139.280961ms for postStartSetup
	I0920 18:09:11.456680  262197 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:09:11.457054  262197 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0920 18:09:11.457090  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:09:11.461168  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.461727  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:09:11.461753  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.462065  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:09:11.462333  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:09:11.462561  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:09:11.462847  262197 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	W0920 18:09:11.549106  262197 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0920 18:09:11.549137  262197 fix.go:56] duration metric: took 1m31.947388158s for fixHost
	I0920 18:09:11.549162  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:09:11.552992  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.553487  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:09:11.553517  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.553693  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:09:11.553985  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:09:11.554242  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:09:11.554421  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:09:11.554622  262197 main.go:141] libmachine: Using SSH client type: native
	I0920 18:09:11.554821  262197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:09:11.554834  262197 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:09:11.667314  262197 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855751.637791068
	
	I0920 18:09:11.667347  262197 fix.go:216] guest clock: 1726855751.637791068
	I0920 18:09:11.667355  262197 fix.go:229] Guest: 2024-09-20 18:09:11.637791068 +0000 UTC Remote: 2024-09-20 18:09:11.549145056 +0000 UTC m=+92.083014358 (delta=88.646012ms)
	I0920 18:09:11.667405  262197 fix.go:200] guest clock delta is within tolerance: 88.646012ms
	I0920 18:09:11.667412  262197 start.go:83] releasing machines lock for "ha-347193", held for 1m32.065673058s
	I0920 18:09:11.667442  262197 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:09:11.667781  262197 main.go:141] libmachine: (ha-347193) Calling .GetIP
	I0920 18:09:11.670992  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.671441  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:09:11.671463  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.671634  262197 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:09:11.672410  262197 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:09:11.672636  262197 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:09:11.672746  262197 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:09:11.672823  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:09:11.672856  262197 ssh_runner.go:195] Run: cat /version.json
	I0920 18:09:11.672880  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:09:11.675595  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.675935  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.676071  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:09:11.676097  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.676266  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:09:11.676334  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:09:11.676354  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.676443  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:09:11.676507  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:09:11.676732  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:09:11.676733  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:09:11.676903  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:09:11.676903  262197 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 18:09:11.677037  262197 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 18:09:11.755449  262197 ssh_runner.go:195] Run: systemctl --version
	I0920 18:09:11.796592  262197 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:09:11.954669  262197 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:09:11.964068  262197 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:09:11.964160  262197 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:09:11.973433  262197 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 18:09:11.973469  262197 start.go:495] detecting cgroup driver to use...
	I0920 18:09:11.973543  262197 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:09:11.990922  262197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:09:12.005474  262197 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:09:12.005537  262197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:09:12.019735  262197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:09:12.034120  262197 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:09:12.189059  262197 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:09:12.341563  262197 docker.go:233] disabling docker service ...
	I0920 18:09:12.341637  262197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:09:12.359710  262197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:09:12.373603  262197 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:09:12.533281  262197 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:09:12.679480  262197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:09:12.695167  262197 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:09:12.714826  262197 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:09:12.714895  262197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:09:12.725596  262197 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:09:12.725686  262197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:09:12.736797  262197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:09:12.747995  262197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:09:12.758767  262197 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:09:12.773849  262197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:09:12.805140  262197 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:09:12.819784  262197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:09:12.832323  262197 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:09:12.843665  262197 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:09:12.854322  262197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:09:13.004279  262197 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:09:13.293885  262197 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:09:13.293997  262197 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:09:13.300376  262197 start.go:563] Will wait 60s for crictl version
	I0920 18:09:13.300447  262197 ssh_runner.go:195] Run: which crictl
	I0920 18:09:13.304402  262197 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:09:13.342669  262197 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:09:13.342747  262197 ssh_runner.go:195] Run: crio --version
	I0920 18:09:13.373347  262197 ssh_runner.go:195] Run: crio --version
	I0920 18:09:13.405881  262197 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:09:13.407531  262197 main.go:141] libmachine: (ha-347193) Calling .GetIP
	I0920 18:09:13.410513  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:13.410936  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:09:13.410958  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:13.411271  262197 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:09:13.416559  262197 kubeadm.go:883] updating cluster {Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.234 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:09:13.416862  262197 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:09:13.416945  262197 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:09:13.462552  262197 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:09:13.462589  262197 crio.go:433] Images already preloaded, skipping extraction
	I0920 18:09:13.462672  262197 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:09:13.511112  262197 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:09:13.511148  262197 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:09:13.511159  262197 kubeadm.go:934] updating node { 192.168.39.246 8443 v1.31.1 crio true true} ...
	I0920 18:09:13.511278  262197 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-347193 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:09:13.511365  262197 ssh_runner.go:195] Run: crio config
	I0920 18:09:13.562219  262197 cni.go:84] Creating CNI manager for ""
	I0920 18:09:13.562245  262197 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 18:09:13.562258  262197 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:09:13.562282  262197 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.246 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-347193 NodeName:ha-347193 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:09:13.562439  262197 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-347193"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:09:13.562460  262197 kube-vip.go:115] generating kube-vip config ...
	I0920 18:09:13.562505  262197 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 18:09:13.574725  262197 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 18:09:13.574846  262197 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0920 18:09:13.574922  262197 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:09:13.585533  262197 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:09:13.585621  262197 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 18:09:13.595171  262197 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0920 18:09:13.612730  262197 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:09:13.630200  262197 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 18:09:13.647994  262197 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 18:09:13.666720  262197 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 18:09:13.672026  262197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:09:13.824146  262197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:09:13.840231  262197 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193 for IP: 192.168.39.246
	I0920 18:09:13.840267  262197 certs.go:194] generating shared ca certs ...
	I0920 18:09:13.840289  262197 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:09:13.840449  262197 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 18:09:13.840486  262197 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 18:09:13.840494  262197 certs.go:256] generating profile certs ...
	I0920 18:09:13.840562  262197 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key
	I0920 18:09:13.840591  262197 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.5ff8df21
	I0920 18:09:13.840634  262197 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.5ff8df21 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.241 192.168.39.250 192.168.39.254]
	I0920 18:09:14.011532  262197 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.5ff8df21 ...
	I0920 18:09:14.011569  262197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.5ff8df21: {Name:mk87d5be8d22deba5ad64b8a99e8620b7d2383e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:09:14.011757  262197 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.5ff8df21 ...
	I0920 18:09:14.011770  262197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.5ff8df21: {Name:mk3a705862b56533dea29ea23f7e721858c5ac92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:09:14.011841  262197 certs.go:381] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.5ff8df21 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt
	I0920 18:09:14.011984  262197 certs.go:385] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.5ff8df21 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key
	I0920 18:09:14.012113  262197 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key
	I0920 18:09:14.012132  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 18:09:14.012145  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 18:09:14.012155  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 18:09:14.012165  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 18:09:14.012174  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 18:09:14.012184  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 18:09:14.012200  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 18:09:14.012211  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 18:09:14.012285  262197 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 18:09:14.012318  262197 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 18:09:14.012327  262197 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:09:14.012347  262197 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:09:14.012367  262197 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:09:14.012388  262197 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 18:09:14.012425  262197 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:09:14.012450  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem -> /usr/share/ca-certificates/244849.pem
	I0920 18:09:14.012463  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /usr/share/ca-certificates/2448492.pem
	I0920 18:09:14.012476  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:09:14.013034  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:09:14.039002  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:09:14.064810  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:09:14.089078  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:09:14.113592  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 18:09:14.138669  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:09:14.165128  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:09:14.191582  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:09:14.219089  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 18:09:14.246135  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 18:09:14.272148  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:09:14.297755  262197 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:09:14.315945  262197 ssh_runner.go:195] Run: openssl version
	I0920 18:09:14.322891  262197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 18:09:14.334496  262197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 18:09:14.339723  262197 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 18:09:14.339803  262197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 18:09:14.346098  262197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 18:09:14.357108  262197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 18:09:14.368897  262197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 18:09:14.373459  262197 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 18:09:14.373527  262197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 18:09:14.379325  262197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:09:14.389168  262197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:09:14.400425  262197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:09:14.404876  262197 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:09:14.404958  262197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:09:14.410506  262197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:09:14.421884  262197 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:09:14.426729  262197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:09:14.432598  262197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:09:14.439146  262197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:09:14.445085  262197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:09:14.451248  262197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:09:14.457219  262197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 18:09:14.463553  262197 kubeadm.go:392] StartCluster: {Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.234 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:09:14.463740  262197 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:09:14.463809  262197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:09:14.510760  262197 cri.go:89] found id: "e00c1c1a07d23b0e7e743bc72a4b8cb588d77f337f2a6d47c89ebe4b153b85cd"
	I0920 18:09:14.510788  262197 cri.go:89] found id: "315821263cc3b7bd2a478cb35322982eda4a845f9fc5b8086022daec034a1460"
	I0920 18:09:14.510792  262197 cri.go:89] found id: "c6f3217d6efc41512b1e0ce34c3d0a20836e299bfc6a4f9f41c15168f43b3366"
	I0920 18:09:14.510795  262197 cri.go:89] found id: "6f54f7a5f2c32d6bc0ec6ee35174b79a19554688494c5a052f414808aba6d5d3"
	I0920 18:09:14.510798  262197 cri.go:89] found id: "998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5"
	I0920 18:09:14.510801  262197 cri.go:89] found id: "4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01"
	I0920 18:09:14.510806  262197 cri.go:89] found id: "54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9"
	I0920 18:09:14.510808  262197 cri.go:89] found id: "ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5"
	I0920 18:09:14.510810  262197 cri.go:89] found id: "3702c95ae17f31f21a9df60d65fe7c873d5a4a63c4bb0951d83c81da6fdcdcc9"
	I0920 18:09:14.510815  262197 cri.go:89] found id: "dce6ebcdcfa257db8db2294b5d4f9a4b32180c57f4a35e50afb651fea43c30c4"
	I0920 18:09:14.510818  262197 cri.go:89] found id: "b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d"
	I0920 18:09:14.510820  262197 cri.go:89] found id: "6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f"
	I0920 18:09:14.510823  262197 cri.go:89] found id: "5db95e41c4eeebdea4c6e2d542a978f368ff56b30edfe1ff54593feed82c7c09"
	I0920 18:09:14.510826  262197 cri.go:89] found id: ""
	I0920 18:09:14.510892  262197 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.579876776Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3adec81-7215-4496-977e-f030b9cb11b6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.580569871Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a32639c7e7c0be6723028368216a23bf1bb33fbcb99b430ed46adf2e1ea4e331,PodSandboxId:7632b2b5457be0d21dfe88928eba2b68988602a96077a81b46aa31cd4d7f107a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726855839618059722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf83a94c43f015b6bd1ac29c32b1c20d057598c03b801cd5b210ddc46cd83d7f,PodSandboxId:87436c1b35bb902bec09e2aa397f6cec52bd5de1fc0ce27345e231fdb57e223e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855801629029619,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31aaef813076b4a046a50b286bf9e3353a117fb39496ae588ea8966417cc50dc,PodSandboxId:4d10a800b45c8f4bc465562ec9c102143b39510a15dce3a2f692025a08d8fac6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855801613530695,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913c5022617727dacc8da0546cc89a522d28cdb23e4cfeffe185e1ed86ddf24e,PodSandboxId:7632b2b5457be0d21dfe88928eba2b68988602a96077a81b46aa31cd4d7f107a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726855795610657675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b86283c7ec4b8b76ad57646f8ede71f8fbd2488da1cdc5e493d7e2e3981b503,PodSandboxId:01ad1d593334bd4f609544ef6f23ed3429b6ad565f373acf5cb64daf6fc99cf6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726855792920406648,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab0f9006f2cb385c7ebb9a923435abda5a700ecf229499c129912b006a9c348,PodSandboxId:1cb5129054927c6f3931cf21cd9e4e1401868fb3a6b5181918e84c0f803cad17,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726855774324549826,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8015b973cd7ad8950a83e2c6acba07ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d376e9770a85a40d5a2f2a68e191b487630df8377fcfcb26b99623bdf354431a,PodSandboxId:5e0f11d03a2a1c6055b2a886ed8284a020be5770487b95a28f5f489e7cfd5757,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726855760023951931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:e4e60e2dd4d595c0dba3467787581b6abb97b4ea3dc50c7556910dea480a591c,PodSandboxId:b9937d468b42f5b8c251f10008f20fb114c4b810d43836929fdf9e36428b8708,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855759751836563,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6758e679045b017c3f1bb18c6052183ee469e16e30b035c3a54985570a4731f,PodSandboxId:0b4488f4dcb2291f5eb49780da9bfcb337b9a164a3168977698df5c75b65fb0c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855759684501493,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb34dd2113dfdecc52e4e844500d8e47d56b75ddd0f12a74b7babec6601a732,PodSandboxId:7b094de6a3f1dec8c2b782fd9c3a5654d9d40297fecf6d49728073f2c4e1db91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726855759520928633,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c46db4ca04a2f60a9e34d715c267fd8cfabc0d403c54d77c65f4c3a60a54e315,PodSandboxId:71332909fe6bc0909cff841b788d7cb263b84fb5b02e0134879011618e5ee2a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855759600999590,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9adca152c5d4c155cc64086c8ecb73c1c77d7152d7a3749d702f64c98f89e7e,PodSandboxId:fe4d3c58e8ee5c064f06fd55412e29e823053f7b801edac843e6f5836a9baab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855759453448071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154
fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f71a8c2fd0f7fa3427392665a6a4fe2a1aebe4cacfcabb32935483b98ad88ba1,PodSandboxId:4d10a800b45c8f4bc465562ec9c102143b39510a15dce3a2f692025a08d8fac6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726855759448735001,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6994ccac0eaf33e08af026f534ceacfa702115682033809428acc70f8d685d8e,PodSandboxId:87436c1b35bb902bec09e2aa397f6cec52bd5de1fc0ce27345e231fdb57e223e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726855759320053700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d13f339c817f2c3d48c059df5dbe95c744b4d770d8f156847fa2a2faddfb16,PodSandboxId:d56c4fb5022a404a46d5f7d82850d40a0aa7777c181ec28335c613507545a0bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726855304216934251,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5,PodSandboxId:cfb097797b5199e13abd148addc36710b46ff21a5a455fd00cb451d38da7e05d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855158811909217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01,PodSandboxId:503157b6402f3fdb0fbd3acb2400d53236322647d77c96f1052ed35d749180f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855158741057313,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9,PodSandboxId:d420593f085b4cce05c57a111a173b0556ed0c67debb38fe8785523e9aaf085f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726855147923732386,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5,PodSandboxId:db50f6f39d94cb0d879c4bc3f389365a1aefdeeb9341d4b7a315ad9363b767e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726855146590192031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f,PodSandboxId:6700b91af83d5fda728b258af762ef66bdc5850c898a59cb3d1e0f4a4a4f5bf2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726855135139539095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d,PodSandboxId:3b399285f0a3eef630d7233d6655753e34da36bbb3233127467544caa535e7b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726855135143537875,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3adec81-7215-4496-977e-f030b9cb11b6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.610229997Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=a673c037-31ce-4301-9fc4-e4213eb63f12 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.610357102Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a673c037-31ce-4301-9fc4-e4213eb63f12 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.630189703Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66bf9901-2c9a-4bf4-94a1-79dd0c7ada1c name=/runtime.v1.RuntimeService/Version
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.630316039Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66bf9901-2c9a-4bf4-94a1-79dd0c7ada1c name=/runtime.v1.RuntimeService/Version
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.631465214Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2513d5b8-a491-45e1-a6bf-aa8df694dc4b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.631905698Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855906631866700,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2513d5b8-a491-45e1-a6bf-aa8df694dc4b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.632627710Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dcc39107-a58e-4594-aec4-88575653ce79 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.632696056Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dcc39107-a58e-4594-aec4-88575653ce79 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.634544960Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a32639c7e7c0be6723028368216a23bf1bb33fbcb99b430ed46adf2e1ea4e331,PodSandboxId:7632b2b5457be0d21dfe88928eba2b68988602a96077a81b46aa31cd4d7f107a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726855839618059722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf83a94c43f015b6bd1ac29c32b1c20d057598c03b801cd5b210ddc46cd83d7f,PodSandboxId:87436c1b35bb902bec09e2aa397f6cec52bd5de1fc0ce27345e231fdb57e223e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855801629029619,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31aaef813076b4a046a50b286bf9e3353a117fb39496ae588ea8966417cc50dc,PodSandboxId:4d10a800b45c8f4bc465562ec9c102143b39510a15dce3a2f692025a08d8fac6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855801613530695,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913c5022617727dacc8da0546cc89a522d28cdb23e4cfeffe185e1ed86ddf24e,PodSandboxId:7632b2b5457be0d21dfe88928eba2b68988602a96077a81b46aa31cd4d7f107a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726855795610657675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b86283c7ec4b8b76ad57646f8ede71f8fbd2488da1cdc5e493d7e2e3981b503,PodSandboxId:01ad1d593334bd4f609544ef6f23ed3429b6ad565f373acf5cb64daf6fc99cf6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726855792920406648,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab0f9006f2cb385c7ebb9a923435abda5a700ecf229499c129912b006a9c348,PodSandboxId:1cb5129054927c6f3931cf21cd9e4e1401868fb3a6b5181918e84c0f803cad17,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726855774324549826,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8015b973cd7ad8950a83e2c6acba07ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d376e9770a85a40d5a2f2a68e191b487630df8377fcfcb26b99623bdf354431a,PodSandboxId:5e0f11d03a2a1c6055b2a886ed8284a020be5770487b95a28f5f489e7cfd5757,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726855760023951931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:e4e60e2dd4d595c0dba3467787581b6abb97b4ea3dc50c7556910dea480a591c,PodSandboxId:b9937d468b42f5b8c251f10008f20fb114c4b810d43836929fdf9e36428b8708,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855759751836563,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6758e679045b017c3f1bb18c6052183ee469e16e30b035c3a54985570a4731f,PodSandboxId:0b4488f4dcb2291f5eb49780da9bfcb337b9a164a3168977698df5c75b65fb0c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855759684501493,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb34dd2113dfdecc52e4e844500d8e47d56b75ddd0f12a74b7babec6601a732,PodSandboxId:7b094de6a3f1dec8c2b782fd9c3a5654d9d40297fecf6d49728073f2c4e1db91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726855759520928633,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c46db4ca04a2f60a9e34d715c267fd8cfabc0d403c54d77c65f4c3a60a54e315,PodSandboxId:71332909fe6bc0909cff841b788d7cb263b84fb5b02e0134879011618e5ee2a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855759600999590,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9adca152c5d4c155cc64086c8ecb73c1c77d7152d7a3749d702f64c98f89e7e,PodSandboxId:fe4d3c58e8ee5c064f06fd55412e29e823053f7b801edac843e6f5836a9baab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855759453448071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154
fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f71a8c2fd0f7fa3427392665a6a4fe2a1aebe4cacfcabb32935483b98ad88ba1,PodSandboxId:4d10a800b45c8f4bc465562ec9c102143b39510a15dce3a2f692025a08d8fac6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726855759448735001,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6994ccac0eaf33e08af026f534ceacfa702115682033809428acc70f8d685d8e,PodSandboxId:87436c1b35bb902bec09e2aa397f6cec52bd5de1fc0ce27345e231fdb57e223e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726855759320053700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d13f339c817f2c3d48c059df5dbe95c744b4d770d8f156847fa2a2faddfb16,PodSandboxId:d56c4fb5022a404a46d5f7d82850d40a0aa7777c181ec28335c613507545a0bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726855304216934251,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5,PodSandboxId:cfb097797b5199e13abd148addc36710b46ff21a5a455fd00cb451d38da7e05d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855158811909217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01,PodSandboxId:503157b6402f3fdb0fbd3acb2400d53236322647d77c96f1052ed35d749180f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855158741057313,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9,PodSandboxId:d420593f085b4cce05c57a111a173b0556ed0c67debb38fe8785523e9aaf085f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726855147923732386,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5,PodSandboxId:db50f6f39d94cb0d879c4bc3f389365a1aefdeeb9341d4b7a315ad9363b767e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726855146590192031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f,PodSandboxId:6700b91af83d5fda728b258af762ef66bdc5850c898a59cb3d1e0f4a4a4f5bf2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726855135139539095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d,PodSandboxId:3b399285f0a3eef630d7233d6655753e34da36bbb3233127467544caa535e7b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726855135143537875,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dcc39107-a58e-4594-aec4-88575653ce79 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.692780886Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=56b0a97f-4eca-4859-b2ac-7b08dbc6e707 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.692861914Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=56b0a97f-4eca-4859-b2ac-7b08dbc6e707 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.694015755Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6571b76-680a-4518-9a93-bfbc05284b11 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.694713059Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855906694630179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6571b76-680a-4518-9a93-bfbc05284b11 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.695788675Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33fa2c57-17d4-4ae2-9040-3761fb879ecb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.695868120Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33fa2c57-17d4-4ae2-9040-3761fb879ecb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.696521873Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a32639c7e7c0be6723028368216a23bf1bb33fbcb99b430ed46adf2e1ea4e331,PodSandboxId:7632b2b5457be0d21dfe88928eba2b68988602a96077a81b46aa31cd4d7f107a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726855839618059722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf83a94c43f015b6bd1ac29c32b1c20d057598c03b801cd5b210ddc46cd83d7f,PodSandboxId:87436c1b35bb902bec09e2aa397f6cec52bd5de1fc0ce27345e231fdb57e223e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855801629029619,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31aaef813076b4a046a50b286bf9e3353a117fb39496ae588ea8966417cc50dc,PodSandboxId:4d10a800b45c8f4bc465562ec9c102143b39510a15dce3a2f692025a08d8fac6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855801613530695,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913c5022617727dacc8da0546cc89a522d28cdb23e4cfeffe185e1ed86ddf24e,PodSandboxId:7632b2b5457be0d21dfe88928eba2b68988602a96077a81b46aa31cd4d7f107a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726855795610657675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b86283c7ec4b8b76ad57646f8ede71f8fbd2488da1cdc5e493d7e2e3981b503,PodSandboxId:01ad1d593334bd4f609544ef6f23ed3429b6ad565f373acf5cb64daf6fc99cf6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726855792920406648,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab0f9006f2cb385c7ebb9a923435abda5a700ecf229499c129912b006a9c348,PodSandboxId:1cb5129054927c6f3931cf21cd9e4e1401868fb3a6b5181918e84c0f803cad17,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726855774324549826,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8015b973cd7ad8950a83e2c6acba07ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d376e9770a85a40d5a2f2a68e191b487630df8377fcfcb26b99623bdf354431a,PodSandboxId:5e0f11d03a2a1c6055b2a886ed8284a020be5770487b95a28f5f489e7cfd5757,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726855760023951931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:e4e60e2dd4d595c0dba3467787581b6abb97b4ea3dc50c7556910dea480a591c,PodSandboxId:b9937d468b42f5b8c251f10008f20fb114c4b810d43836929fdf9e36428b8708,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855759751836563,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6758e679045b017c3f1bb18c6052183ee469e16e30b035c3a54985570a4731f,PodSandboxId:0b4488f4dcb2291f5eb49780da9bfcb337b9a164a3168977698df5c75b65fb0c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855759684501493,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb34dd2113dfdecc52e4e844500d8e47d56b75ddd0f12a74b7babec6601a732,PodSandboxId:7b094de6a3f1dec8c2b782fd9c3a5654d9d40297fecf6d49728073f2c4e1db91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726855759520928633,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c46db4ca04a2f60a9e34d715c267fd8cfabc0d403c54d77c65f4c3a60a54e315,PodSandboxId:71332909fe6bc0909cff841b788d7cb263b84fb5b02e0134879011618e5ee2a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855759600999590,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9adca152c5d4c155cc64086c8ecb73c1c77d7152d7a3749d702f64c98f89e7e,PodSandboxId:fe4d3c58e8ee5c064f06fd55412e29e823053f7b801edac843e6f5836a9baab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855759453448071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154
fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f71a8c2fd0f7fa3427392665a6a4fe2a1aebe4cacfcabb32935483b98ad88ba1,PodSandboxId:4d10a800b45c8f4bc465562ec9c102143b39510a15dce3a2f692025a08d8fac6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726855759448735001,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6994ccac0eaf33e08af026f534ceacfa702115682033809428acc70f8d685d8e,PodSandboxId:87436c1b35bb902bec09e2aa397f6cec52bd5de1fc0ce27345e231fdb57e223e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726855759320053700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d13f339c817f2c3d48c059df5dbe95c744b4d770d8f156847fa2a2faddfb16,PodSandboxId:d56c4fb5022a404a46d5f7d82850d40a0aa7777c181ec28335c613507545a0bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726855304216934251,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5,PodSandboxId:cfb097797b5199e13abd148addc36710b46ff21a5a455fd00cb451d38da7e05d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855158811909217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01,PodSandboxId:503157b6402f3fdb0fbd3acb2400d53236322647d77c96f1052ed35d749180f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855158741057313,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9,PodSandboxId:d420593f085b4cce05c57a111a173b0556ed0c67debb38fe8785523e9aaf085f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726855147923732386,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5,PodSandboxId:db50f6f39d94cb0d879c4bc3f389365a1aefdeeb9341d4b7a315ad9363b767e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726855146590192031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f,PodSandboxId:6700b91af83d5fda728b258af762ef66bdc5850c898a59cb3d1e0f4a4a4f5bf2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726855135139539095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d,PodSandboxId:3b399285f0a3eef630d7233d6655753e34da36bbb3233127467544caa535e7b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726855135143537875,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33fa2c57-17d4-4ae2-9040-3761fb879ecb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.752605778Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e7edb91d-7e1b-46b4-af7a-55a8a8d59657 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.752729600Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e7edb91d-7e1b-46b4-af7a-55a8a8d59657 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.754230360Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3dec6ac2-0fe9-4d3e-bfd6-4e52e4e4c331 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.755374232Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855906755336615,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3dec6ac2-0fe9-4d3e-bfd6-4e52e4e4c331 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.756502880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d5e719f-1922-46e2-b184-7f1b89b67f96 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.756562370Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d5e719f-1922-46e2-b184-7f1b89b67f96 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:11:46 ha-347193 crio[3597]: time="2024-09-20 18:11:46.757028729Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a32639c7e7c0be6723028368216a23bf1bb33fbcb99b430ed46adf2e1ea4e331,PodSandboxId:7632b2b5457be0d21dfe88928eba2b68988602a96077a81b46aa31cd4d7f107a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726855839618059722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf83a94c43f015b6bd1ac29c32b1c20d057598c03b801cd5b210ddc46cd83d7f,PodSandboxId:87436c1b35bb902bec09e2aa397f6cec52bd5de1fc0ce27345e231fdb57e223e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855801629029619,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31aaef813076b4a046a50b286bf9e3353a117fb39496ae588ea8966417cc50dc,PodSandboxId:4d10a800b45c8f4bc465562ec9c102143b39510a15dce3a2f692025a08d8fac6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855801613530695,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913c5022617727dacc8da0546cc89a522d28cdb23e4cfeffe185e1ed86ddf24e,PodSandboxId:7632b2b5457be0d21dfe88928eba2b68988602a96077a81b46aa31cd4d7f107a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726855795610657675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b86283c7ec4b8b76ad57646f8ede71f8fbd2488da1cdc5e493d7e2e3981b503,PodSandboxId:01ad1d593334bd4f609544ef6f23ed3429b6ad565f373acf5cb64daf6fc99cf6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726855792920406648,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab0f9006f2cb385c7ebb9a923435abda5a700ecf229499c129912b006a9c348,PodSandboxId:1cb5129054927c6f3931cf21cd9e4e1401868fb3a6b5181918e84c0f803cad17,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726855774324549826,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8015b973cd7ad8950a83e2c6acba07ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d376e9770a85a40d5a2f2a68e191b487630df8377fcfcb26b99623bdf354431a,PodSandboxId:5e0f11d03a2a1c6055b2a886ed8284a020be5770487b95a28f5f489e7cfd5757,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726855760023951931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:e4e60e2dd4d595c0dba3467787581b6abb97b4ea3dc50c7556910dea480a591c,PodSandboxId:b9937d468b42f5b8c251f10008f20fb114c4b810d43836929fdf9e36428b8708,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855759751836563,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6758e679045b017c3f1bb18c6052183ee469e16e30b035c3a54985570a4731f,PodSandboxId:0b4488f4dcb2291f5eb49780da9bfcb337b9a164a3168977698df5c75b65fb0c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855759684501493,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb34dd2113dfdecc52e4e844500d8e47d56b75ddd0f12a74b7babec6601a732,PodSandboxId:7b094de6a3f1dec8c2b782fd9c3a5654d9d40297fecf6d49728073f2c4e1db91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726855759520928633,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c46db4ca04a2f60a9e34d715c267fd8cfabc0d403c54d77c65f4c3a60a54e315,PodSandboxId:71332909fe6bc0909cff841b788d7cb263b84fb5b02e0134879011618e5ee2a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855759600999590,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9adca152c5d4c155cc64086c8ecb73c1c77d7152d7a3749d702f64c98f89e7e,PodSandboxId:fe4d3c58e8ee5c064f06fd55412e29e823053f7b801edac843e6f5836a9baab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855759453448071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154
fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f71a8c2fd0f7fa3427392665a6a4fe2a1aebe4cacfcabb32935483b98ad88ba1,PodSandboxId:4d10a800b45c8f4bc465562ec9c102143b39510a15dce3a2f692025a08d8fac6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726855759448735001,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6994ccac0eaf33e08af026f534ceacfa702115682033809428acc70f8d685d8e,PodSandboxId:87436c1b35bb902bec09e2aa397f6cec52bd5de1fc0ce27345e231fdb57e223e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726855759320053700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d13f339c817f2c3d48c059df5dbe95c744b4d770d8f156847fa2a2faddfb16,PodSandboxId:d56c4fb5022a404a46d5f7d82850d40a0aa7777c181ec28335c613507545a0bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726855304216934251,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5,PodSandboxId:cfb097797b5199e13abd148addc36710b46ff21a5a455fd00cb451d38da7e05d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855158811909217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01,PodSandboxId:503157b6402f3fdb0fbd3acb2400d53236322647d77c96f1052ed35d749180f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855158741057313,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9,PodSandboxId:d420593f085b4cce05c57a111a173b0556ed0c67debb38fe8785523e9aaf085f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726855147923732386,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5,PodSandboxId:db50f6f39d94cb0d879c4bc3f389365a1aefdeeb9341d4b7a315ad9363b767e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726855146590192031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f,PodSandboxId:6700b91af83d5fda728b258af762ef66bdc5850c898a59cb3d1e0f4a4a4f5bf2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726855135139539095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d,PodSandboxId:3b399285f0a3eef630d7233d6655753e34da36bbb3233127467544caa535e7b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726855135143537875,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d5e719f-1922-46e2-b184-7f1b89b67f96 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a32639c7e7c0b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   7632b2b5457be       storage-provisioner
	bf83a94c43f01       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   2                   87436c1b35bb9       kube-controller-manager-ha-347193
	31aaef813076b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            3                   4d10a800b45c8       kube-apiserver-ha-347193
	913c502261772       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   7632b2b5457be       storage-provisioner
	1b86283c7ec4b       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   01ad1d593334b       busybox-7dff88458-vv8nw
	dab0f9006f2cb       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   1cb5129054927       kube-vip-ha-347193
	d376e9770a85a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      2 minutes ago        Running             kube-proxy                1                   5e0f11d03a2a1       kube-proxy-rdqkg
	e4e60e2dd4d59       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   b9937d468b42f       coredns-7c65d6cfc9-6llmd
	f6758e679045b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            1                   0b4488f4dcb22       kube-scheduler-ha-347193
	c46db4ca04a2f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   71332909fe6bc       coredns-7c65d6cfc9-bkmhn
	afb34dd2113df       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   7b094de6a3f1d       kindnet-z24zp
	e9adca152c5d4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   fe4d3c58e8ee5       etcd-ha-347193
	f71a8c2fd0f7f       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Exited              kube-apiserver            2                   4d10a800b45c8       kube-apiserver-ha-347193
	6994ccac0eaf3       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Exited              kube-controller-manager   1                   87436c1b35bb9       kube-controller-manager-ha-347193
	24d13f339c817       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   d56c4fb5022a4       busybox-7dff88458-vv8nw
	998d6fb086954       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago       Exited              coredns                   0                   cfb097797b519       coredns-7c65d6cfc9-6llmd
	4980eee34ad3b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago       Exited              coredns                   0                   503157b6402f3       coredns-7c65d6cfc9-bkmhn
	54d750519756c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      12 minutes ago       Exited              kube-proxy                0                   d420593f085b4       kube-proxy-rdqkg
	ebfa9fcdc2495       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      12 minutes ago       Exited              kindnet-cni               0                   db50f6f39d94c       kindnet-z24zp
	b9e6f76c6e332       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      12 minutes ago       Exited              etcd                      0                   3b399285f0a3e       etcd-ha-347193
	6cae0975e4bde       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      12 minutes ago       Exited              kube-scheduler            0                   6700b91af83d5       kube-scheduler-ha-347193
	
	
	==> coredns [4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01] <==
	[INFO] 10.244.1.2:35811 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004936568s
	[INFO] 10.244.1.2:36016 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003046132s
	[INFO] 10.244.1.2:34653 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170016s
	[INFO] 10.244.1.2:59470 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145491s
	[INFO] 10.244.2.2:50581 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001424335s
	[INFO] 10.244.2.2:53657 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087743s
	[INFO] 10.244.0.4:45468 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002017081s
	[INFO] 10.244.0.4:50151 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148946s
	[INFO] 10.244.0.4:51594 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101915s
	[INFO] 10.244.0.4:54414 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114937s
	[INFO] 10.244.1.2:38701 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218522s
	[INFO] 10.244.1.2:41853 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000182128s
	[INFO] 10.244.2.2:48909 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000169464s
	[INFO] 10.244.0.4:55409 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111385s
	[INFO] 10.244.1.2:58822 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000137575s
	[INFO] 10.244.2.2:55178 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124535s
	[INFO] 10.244.2.2:44350 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150664s
	[INFO] 10.244.0.4:57962 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114195s
	[INFO] 10.244.0.4:56551 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094805s
	[INFO] 10.244.0.4:45171 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000054433s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1809&timeout=6m58s&timeoutSeconds=418&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1758&timeout=5m40s&timeoutSeconds=340&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1762&timeout=9m40s&timeoutSeconds=580&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5] <==
	[INFO] 10.244.2.2:38149 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001158s
	[INFO] 10.244.2.2:42221 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000113646s
	[INFO] 10.244.2.2:49599 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173465s
	[INFO] 10.244.0.4:60750 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180138s
	[INFO] 10.244.0.4:46666 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171665s
	[INFO] 10.244.0.4:52002 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001444571s
	[INFO] 10.244.0.4:45151 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00006024s
	[INFO] 10.244.1.2:34989 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195829s
	[INFO] 10.244.1.2:34116 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087145s
	[INFO] 10.244.2.2:41553 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124108s
	[INFO] 10.244.2.2:35637 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116822s
	[INFO] 10.244.2.2:34355 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111835s
	[INFO] 10.244.0.4:48848 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165085s
	[INFO] 10.244.0.4:49930 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082351s
	[INFO] 10.244.0.4:35945 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077731s
	[INFO] 10.244.1.2:37666 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145796s
	[INFO] 10.244.1.2:50941 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000259758s
	[INFO] 10.244.1.2:52591 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141872s
	[INFO] 10.244.2.2:39683 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141964s
	[INFO] 10.244.2.2:51672 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000176831s
	[INFO] 10.244.0.4:58285 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000193464s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	
	
	==> coredns [c46db4ca04a2f60a9e34d715c267fd8cfabc0d403c54d77c65f4c3a60a54e315] <==
	[INFO] plugin/kubernetes: Trace[391238245]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 18:09:25.010) (total time: 10001ms):
	Trace[391238245]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:09:35.011)
	Trace[391238245]: [10.001526491s] [10.001526491s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e4e60e2dd4d595c0dba3467787581b6abb97b4ea3dc50c7556910dea480a591c] <==
	Trace[991853822]: [10.00142678s] [10.00142678s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59252->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59252->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:59240->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1914598391]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 18:09:31.325) (total time: 10379ms):
	Trace[1914598391]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:59240->10.96.0.1:443: read: connection reset by peer 10379ms (18:09:41.704)
	Trace[1914598391]: [10.379690737s] [10.379690737s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:59240->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-347193
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-347193
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=ha-347193
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T17_59_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:59:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-347193
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:11:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:10:04 +0000   Fri, 20 Sep 2024 17:59:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:10:04 +0000   Fri, 20 Sep 2024 17:59:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:10:04 +0000   Fri, 20 Sep 2024 17:59:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:10:04 +0000   Fri, 20 Sep 2024 17:59:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-347193
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 24c3d61093c44fc4b2898b98b4bdbc70
	  System UUID:                24c3d610-93c4-4fc4-b289-8b98b4bdbc70
	  Boot ID:                    5638bfe2-e986-4137-9385-e18b7e4b519b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vv8nw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-6llmd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7c65d6cfc9-bkmhn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-347193                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-z24zp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-347193             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-347193    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-rdqkg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-347193             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-347193                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 104s                   kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-347193 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-347193 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-347193 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     12m                    kubelet          Node ha-347193 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                    kubelet          Node ha-347193 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  12m                    kubelet          Node ha-347193 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                    node-controller  Node ha-347193 event: Registered Node ha-347193 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-347193 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node ha-347193 event: Registered Node ha-347193 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-347193 event: Registered Node ha-347193 in Controller
	  Warning  ContainerGCFailed        2m46s (x2 over 3m46s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m37s (x3 over 3m26s)  kubelet          Node ha-347193 status is now: NodeNotReady
	  Normal   RegisteredNode           106s                   node-controller  Node ha-347193 event: Registered Node ha-347193 in Controller
	  Normal   RegisteredNode           100s                   node-controller  Node ha-347193 event: Registered Node ha-347193 in Controller
	  Normal   RegisteredNode           39s                    node-controller  Node ha-347193 event: Registered Node ha-347193 in Controller
	
	
	Name:               ha-347193-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-347193-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=ha-347193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_59_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:59:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-347193-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:11:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:10:46 +0000   Fri, 20 Sep 2024 18:10:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:10:46 +0000   Fri, 20 Sep 2024 18:10:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:10:46 +0000   Fri, 20 Sep 2024 18:10:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:10:46 +0000   Fri, 20 Sep 2024 18:10:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    ha-347193-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 325a97217aeb4c8f9cb24edad597fd25
	  System UUID:                325a9721-7aeb-4c8f-9cb2-4edad597fd25
	  Boot ID:                    6084d441-850a-4962-98a2-2de79ae637fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-85fk6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-347193-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-cqbxl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-347193-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-347193-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-ffdvq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-347193-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-347193-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 83s                    kube-proxy       
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-347193-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-347193-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node ha-347193-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                    node-controller  Node ha-347193-m02 event: Registered Node ha-347193-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-347193-m02 event: Registered Node ha-347193-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-347193-m02 event: Registered Node ha-347193-m02 in Controller
	  Normal  NodeNotReady             8m17s                  node-controller  Node ha-347193-m02 status is now: NodeNotReady
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node ha-347193-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node ha-347193-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x7 over 2m10s)  kubelet          Node ha-347193-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           106s                   node-controller  Node ha-347193-m02 event: Registered Node ha-347193-m02 in Controller
	  Normal  RegisteredNode           100s                   node-controller  Node ha-347193-m02 event: Registered Node ha-347193-m02 in Controller
	  Normal  RegisteredNode           39s                    node-controller  Node ha-347193-m02 event: Registered Node ha-347193-m02 in Controller
	
	
	Name:               ha-347193-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-347193-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=ha-347193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_01_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:01:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-347193-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:11:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:11:20 +0000   Fri, 20 Sep 2024 18:10:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:11:20 +0000   Fri, 20 Sep 2024 18:10:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:11:20 +0000   Fri, 20 Sep 2024 18:10:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:11:20 +0000   Fri, 20 Sep 2024 18:10:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-347193-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 987815694814485e84522bfda359ab42
	  System UUID:                98781569-4814-485e-8452-2bfda359ab42
	  Boot ID:                    a7399bb2-ddb0-4203-9db8-4c97384d1c0c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-p824h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-347193-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-5msnk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-347193-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-347193-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-pccxp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-347193-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-347193-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 39s                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-347193-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-347193-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-347193-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-347193-m03 event: Registered Node ha-347193-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-347193-m03 event: Registered Node ha-347193-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-347193-m03 event: Registered Node ha-347193-m03 in Controller
	  Normal   RegisteredNode           106s               node-controller  Node ha-347193-m03 event: Registered Node ha-347193-m03 in Controller
	  Normal   RegisteredNode           100s               node-controller  Node ha-347193-m03 event: Registered Node ha-347193-m03 in Controller
	  Normal   NodeNotReady             66s                node-controller  Node ha-347193-m03 status is now: NodeNotReady
	  Normal   Starting                 58s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  57s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 57s                kubelet          Node ha-347193-m03 has been rebooted, boot id: a7399bb2-ddb0-4203-9db8-4c97384d1c0c
	  Normal   NodeHasSufficientMemory  57s (x2 over 57s)  kubelet          Node ha-347193-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s (x2 over 57s)  kubelet          Node ha-347193-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s (x2 over 57s)  kubelet          Node ha-347193-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                57s                kubelet          Node ha-347193-m03 status is now: NodeReady
	  Normal   RegisteredNode           39s                node-controller  Node ha-347193-m03 event: Registered Node ha-347193-m03 in Controller
	
	
	Name:               ha-347193-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-347193-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=ha-347193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_02_19_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:02:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-347193-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:11:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:11:39 +0000   Fri, 20 Sep 2024 18:11:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:11:39 +0000   Fri, 20 Sep 2024 18:11:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:11:39 +0000   Fri, 20 Sep 2024 18:11:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:11:39 +0000   Fri, 20 Sep 2024 18:11:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    ha-347193-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 36beb0176a7e4c449ee02f4adaf970e8
	  System UUID:                36beb017-6a7e-4c44-9ee0-2f4adaf970e8
	  Boot ID:                    692f9cba-783e-41cb-9228-7e49d59be4d7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-t5f94       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m29s
	  kube-system                 kube-proxy-gtwzd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m23s                  kube-proxy       
	  Normal   Starting                 4s                     kube-proxy       
	  Normal   CIDRAssignmentFailed     9m29s                  cidrAllocator    Node ha-347193-m04 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  9m29s (x2 over 9m29s)  kubelet          Node ha-347193-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m29s (x2 over 9m29s)  kubelet          Node ha-347193-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m29s (x2 over 9m29s)  kubelet          Node ha-347193-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m28s                  node-controller  Node ha-347193-m04 event: Registered Node ha-347193-m04 in Controller
	  Normal   RegisteredNode           9m28s                  node-controller  Node ha-347193-m04 event: Registered Node ha-347193-m04 in Controller
	  Normal   RegisteredNode           9m27s                  node-controller  Node ha-347193-m04 event: Registered Node ha-347193-m04 in Controller
	  Normal   NodeReady                9m9s                   kubelet          Node ha-347193-m04 status is now: NodeReady
	  Normal   RegisteredNode           106s                   node-controller  Node ha-347193-m04 event: Registered Node ha-347193-m04 in Controller
	  Normal   RegisteredNode           100s                   node-controller  Node ha-347193-m04 event: Registered Node ha-347193-m04 in Controller
	  Normal   NodeNotReady             66s                    node-controller  Node ha-347193-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           39s                    node-controller  Node ha-347193-m04 event: Registered Node ha-347193-m04 in Controller
	  Normal   Starting                 8s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x3 over 8s)        kubelet          Node ha-347193-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x3 over 8s)        kubelet          Node ha-347193-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x3 over 8s)        kubelet          Node ha-347193-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s (x2 over 8s)        kubelet          Node ha-347193-m04 has been rebooted, boot id: 692f9cba-783e-41cb-9228-7e49d59be4d7
	  Normal   NodeReady                8s (x2 over 8s)        kubelet          Node ha-347193-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +9.314105] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.055929] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059483] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.173430] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.132192] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.252987] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +3.876503] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +5.009721] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.059883] kauditd_printk_skb: 158 callbacks suppressed
	[Sep20 17:59] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +0.095619] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.048443] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.211237] kauditd_printk_skb: 38 callbacks suppressed
	[Sep20 18:00] kauditd_printk_skb: 24 callbacks suppressed
	[Sep20 18:09] systemd-fstab-generator[3521]: Ignoring "noauto" option for root device
	[  +0.157859] systemd-fstab-generator[3533]: Ignoring "noauto" option for root device
	[  +0.187478] systemd-fstab-generator[3547]: Ignoring "noauto" option for root device
	[  +0.150277] systemd-fstab-generator[3559]: Ignoring "noauto" option for root device
	[  +0.317882] systemd-fstab-generator[3587]: Ignoring "noauto" option for root device
	[  +0.825728] systemd-fstab-generator[3683]: Ignoring "noauto" option for root device
	[  +5.320469] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.057031] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.052446] kauditd_printk_skb: 1 callbacks suppressed
	[ +16.831692] kauditd_printk_skb: 5 callbacks suppressed
	[Sep20 18:10] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d] <==
	2024/09/20 18:07:40 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/20 18:07:40 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-20T18:07:40.582958Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.246:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:07:40.583046Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.246:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T18:07:40.584582Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b19954eb16571c64","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-20T18:07:40.584786Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7525355198545e9d"}
	{"level":"info","ts":"2024-09-20T18:07:40.584819Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7525355198545e9d"}
	{"level":"info","ts":"2024-09-20T18:07:40.584845Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7525355198545e9d"}
	{"level":"info","ts":"2024-09-20T18:07:40.584914Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b19954eb16571c64","remote-peer-id":"7525355198545e9d"}
	{"level":"info","ts":"2024-09-20T18:07:40.584990Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b19954eb16571c64","remote-peer-id":"7525355198545e9d"}
	{"level":"info","ts":"2024-09-20T18:07:40.585055Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b19954eb16571c64","remote-peer-id":"7525355198545e9d"}
	{"level":"info","ts":"2024-09-20T18:07:40.585091Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7525355198545e9d"}
	{"level":"info","ts":"2024-09-20T18:07:40.585117Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:07:40.585150Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:07:40.585196Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:07:40.585353Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b19954eb16571c64","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:07:40.585413Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b19954eb16571c64","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:07:40.585467Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b19954eb16571c64","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:07:40.585500Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:07:40.590619Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.246:2380"}
	{"level":"warn","ts":"2024-09-20T18:07:40.590725Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.845116918s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-20T18:07:40.590919Z","caller":"traceutil/trace.go:171","msg":"trace[514827633] range","detail":"{range_begin:; range_end:; }","duration":"8.845328468s","start":"2024-09-20T18:07:31.745581Z","end":"2024-09-20T18:07:40.590910Z","steps":["trace[514827633] 'agreement among raft nodes before linearized reading'  (duration: 8.845114802s)"],"step_count":1}
	{"level":"error","ts":"2024-09-20T18:07:40.590984Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-20T18:07:40.590889Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.246:2380"}
	{"level":"info","ts":"2024-09-20T18:07:40.591202Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-347193","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.246:2380"],"advertise-client-urls":["https://192.168.39.246:2379"]}
	
	
	==> etcd [e9adca152c5d4c155cc64086c8ecb73c1c77d7152d7a3749d702f64c98f89e7e] <==
	{"level":"warn","ts":"2024-09-20T18:10:44.169253Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.250:2380/version","remote-member-id":"c8ee87ebd06db0cf","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T18:10:44.169466Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c8ee87ebd06db0cf","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T18:10:45.455120Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c8ee87ebd06db0cf","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T18:10:45.455167Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c8ee87ebd06db0cf","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T18:10:48.172007Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.250:2380/version","remote-member-id":"c8ee87ebd06db0cf","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T18:10:48.172059Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c8ee87ebd06db0cf","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T18:10:50.455212Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c8ee87ebd06db0cf","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T18:10:50.455447Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c8ee87ebd06db0cf","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T18:10:52.174333Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.250:2380/version","remote-member-id":"c8ee87ebd06db0cf","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T18:10:52.174458Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c8ee87ebd06db0cf","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T18:10:55.455449Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c8ee87ebd06db0cf","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T18:10:55.455677Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c8ee87ebd06db0cf","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T18:10:56.175833Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.250:2380/version","remote-member-id":"c8ee87ebd06db0cf","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T18:10:56.175892Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c8ee87ebd06db0cf","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T18:11:00.178395Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.250:2380/version","remote-member-id":"c8ee87ebd06db0cf","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T18:11:00.178524Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c8ee87ebd06db0cf","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T18:11:00.456061Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c8ee87ebd06db0cf","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T18:11:00.456116Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c8ee87ebd06db0cf","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-20T18:11:02.068341Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:11:02.080059Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b19954eb16571c64","to":"c8ee87ebd06db0cf","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-20T18:11:02.080200Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b19954eb16571c64","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:11:02.085211Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b19954eb16571c64","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:11:02.089679Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b19954eb16571c64","to":"c8ee87ebd06db0cf","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-20T18:11:02.089795Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b19954eb16571c64","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:11:02.098053Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b19954eb16571c64","remote-peer-id":"c8ee87ebd06db0cf"}
	
	
	==> kernel <==
	 18:11:47 up 13 min,  0 users,  load average: 0.52, 0.43, 0.26
	Linux ha-347193 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [afb34dd2113dfdecc52e4e844500d8e47d56b75ddd0f12a74b7babec6601a732] <==
	I0920 18:11:10.814834       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:11:20.813685       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:11:20.813744       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:11:20.813891       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0920 18:11:20.813898       1 main.go:322] Node ha-347193-m03 has CIDR [10.244.2.0/24] 
	I0920 18:11:20.813953       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:11:20.813979       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	I0920 18:11:20.814037       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:11:20.814056       1 main.go:299] handling current node
	I0920 18:11:30.821770       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:11:30.821917       1 main.go:299] handling current node
	I0920 18:11:30.821947       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:11:30.821970       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:11:30.822250       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0920 18:11:30.822380       1 main.go:322] Node ha-347193-m03 has CIDR [10.244.2.0/24] 
	I0920 18:11:30.822586       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:11:30.822648       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	I0920 18:11:40.812762       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:11:40.813021       1 main.go:299] handling current node
	I0920 18:11:40.813073       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:11:40.813093       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:11:40.813245       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0920 18:11:40.813327       1 main.go:322] Node ha-347193-m03 has CIDR [10.244.2.0/24] 
	I0920 18:11:40.813438       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:11:40.813462       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5] <==
	I0920 18:07:17.652140       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:07:17.652233       1 main.go:299] handling current node
	I0920 18:07:17.652255       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:07:17.652260       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:07:17.652547       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0920 18:07:17.652607       1 main.go:322] Node ha-347193-m03 has CIDR [10.244.2.0/24] 
	I0920 18:07:17.652723       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:07:17.652742       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	I0920 18:07:27.660457       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:07:27.660547       1 main.go:299] handling current node
	I0920 18:07:27.660571       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:07:27.660578       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:07:27.660875       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0920 18:07:27.660936       1 main.go:322] Node ha-347193-m03 has CIDR [10.244.2.0/24] 
	I0920 18:07:27.661042       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:07:27.661061       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	E0920 18:07:28.584892       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1809&timeout=6m47s&timeoutSeconds=407&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0920 18:07:37.656105       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:07:37.656144       1 main.go:299] handling current node
	I0920 18:07:37.656159       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:07:37.656166       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:07:37.656438       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0920 18:07:37.656497       1 main.go:322] Node ha-347193-m03 has CIDR [10.244.2.0/24] 
	I0920 18:07:37.656601       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:07:37.656730       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [31aaef813076b4a046a50b286bf9e3353a117fb39496ae588ea8966417cc50dc] <==
	I0920 18:10:03.800850       1 controller.go:78] Starting OpenAPI AggregationController
	I0920 18:10:03.800904       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0920 18:10:03.886489       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 18:10:03.890675       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 18:10:03.891065       1 policy_source.go:224] refreshing policies
	I0920 18:10:03.893209       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 18:10:03.895245       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 18:10:03.895400       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 18:10:03.897113       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 18:10:03.897202       1 aggregator.go:171] initial CRD sync complete...
	I0920 18:10:03.897250       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 18:10:03.897335       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 18:10:03.897361       1 cache.go:39] Caches are synced for autoregister controller
	I0920 18:10:03.899922       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 18:10:03.900664       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 18:10:03.900689       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 18:10:03.900828       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 18:10:03.901878       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 18:10:03.917732       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0920 18:10:03.920770       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.241 192.168.39.250]
	I0920 18:10:03.931757       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 18:10:03.957803       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0920 18:10:03.970064       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0920 18:10:04.804674       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0920 18:10:05.484327       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.241 192.168.39.246 192.168.39.250]
	
	
	==> kube-apiserver [f71a8c2fd0f7fa3427392665a6a4fe2a1aebe4cacfcabb32935483b98ad88ba1] <==
	I0920 18:09:20.219405       1 options.go:228] external host was not specified, using 192.168.39.246
	I0920 18:09:20.223885       1 server.go:142] Version: v1.31.1
	I0920 18:09:20.223954       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:09:20.633059       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0920 18:09:20.673542       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0920 18:09:20.673628       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0920 18:09:20.673974       1 instance.go:232] Using reconciler: lease
	I0920 18:09:20.674609       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0920 18:09:40.629951       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0920 18:09:40.629957       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0920 18:09:40.675266       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [6994ccac0eaf33e08af026f534ceacfa702115682033809428acc70f8d685d8e] <==
	I0920 18:09:20.849588       1 serving.go:386] Generated self-signed cert in-memory
	I0920 18:09:21.063655       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0920 18:09:21.063751       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:09:21.074167       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0920 18:09:21.074413       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 18:09:21.074436       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0920 18:09:21.074460       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0920 18:09:41.685646       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.246:8443/healthz\": dial tcp 192.168.39.246:8443: connect: connection refused"
	
	
	==> kube-controller-manager [bf83a94c43f015b6bd1ac29c32b1c20d057598c03b801cd5b210ddc46cd83d7f] <==
	I0920 18:10:41.348070       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-347193-m04"
	I0920 18:10:41.348248       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m03"
	I0920 18:10:41.355539       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:10:41.372934       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m03"
	I0920 18:10:41.381788       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:10:41.634894       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.083536ms"
	I0920 18:10:41.634994       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="55.899µs"
	I0920 18:10:42.260206       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m03"
	I0920 18:10:46.068951       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m02"
	I0920 18:10:46.706151       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m03"
	I0920 18:10:50.037579       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m03"
	I0920 18:10:50.061521       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m03"
	I0920 18:10:50.951624       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="57.728µs"
	I0920 18:10:51.646437       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m03"
	I0920 18:10:52.342659       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:10:56.793730       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:11:08.288320       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:11:08.385968       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:11:10.262434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.216416ms"
	I0920 18:11:10.262566       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.83µs"
	I0920 18:11:20.881435       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m03"
	I0920 18:11:39.272686       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-347193-m04"
	I0920 18:11:39.272924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:11:39.292719       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:11:41.671418       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	
	
	==> kube-proxy [54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9] <==
	E0920 18:06:29.899962       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:06:32.970670       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1694": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:06:32.970879       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1694\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:06:32.971051       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-347193&resourceVersion=1809": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:06:32.971089       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-347193&resourceVersion=1809\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:06:36.041655       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:06:36.041891       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:06:39.114686       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1694": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:06:39.114934       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1694\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:06:39.115160       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-347193&resourceVersion=1809": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:06:39.115461       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-347193&resourceVersion=1809\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:06:42.185608       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:06:42.185761       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:06:48.329358       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1694": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:06:48.329510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1694\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:06:51.402655       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:06:51.402870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:06:51.403170       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-347193&resourceVersion=1809": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:06:51.403258       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-347193&resourceVersion=1809\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:07:09.834636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1694": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:07:09.835903       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1694\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:07:12.906022       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:07:12.906761       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:07:15.978251       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-347193&resourceVersion=1809": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:07:15.978561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-347193&resourceVersion=1809\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [d376e9770a85a40d5a2f2a68e191b487630df8377fcfcb26b99623bdf354431a] <==
	E0920 18:10:01.870389       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-347193\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0920 18:10:01.873367       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0920 18:10:01.873524       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:10:02.405899       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:10:02.407378       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:10:02.407473       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:10:02.447558       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:10:02.448173       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:10:02.448237       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:10:02.461661       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:10:02.461738       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:10:02.461783       1 config.go:199] "Starting service config controller"
	I0920 18:10:02.461802       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:10:02.462430       1 config.go:328] "Starting node config controller"
	I0920 18:10:02.462490       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0920 18:10:04.937162       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-347193&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:10:04.938545       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-347193&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:10:04.938720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:10:04.938785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:10:04.938894       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:10:04.938930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0920 18:10:04.939599       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0920 18:10:05.863404       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:10:06.062843       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:10:06.562881       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f] <==
	W0920 17:58:59.537813       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:58:59.537936       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.543453       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 17:58:59.543567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.650341       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 17:58:59.650386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 17:59:01.377349       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0920 18:02:18.846875       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-t5f94\": pod kindnet-t5f94 is already assigned to node \"ha-347193-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-t5f94" node="ha-347193-m04"
	E0920 18:02:18.847041       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 33dab94e-9da4-4a58-83f6-a7a351c8c216(kube-system/kindnet-t5f94) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-t5f94"
	E0920 18:02:18.847081       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-t5f94\": pod kindnet-t5f94 is already assigned to node \"ha-347193-m04\"" pod="kube-system/kindnet-t5f94"
	I0920 18:02:18.847108       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-t5f94" node="ha-347193-m04"
	E0920 18:07:30.959838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0920 18:07:31.931401       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0920 18:07:32.501085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0920 18:07:34.509511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0920 18:07:35.687201       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0920 18:07:36.060126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0920 18:07:36.093018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0920 18:07:36.645207       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0920 18:07:36.812022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0920 18:07:36.910109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0920 18:07:37.051528       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0920 18:07:37.894179       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0920 18:07:38.884995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0920 18:07:40.514451       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f6758e679045b017c3f1bb18c6052183ee469e16e30b035c3a54985570a4731f] <==
	W0920 18:09:58.498224       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.246:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0920 18:09:58.498331       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.246:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.246:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:09:58.603779       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.246:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0920 18:09:58.603928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.246:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.246:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:09:58.834196       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.246:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0920 18:09:58.834437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.246:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.246:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:09:59.459238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.246:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0920 18:09:59.459356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.246:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.246:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:09:59.556489       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.246:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0920 18:09:59.556548       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.246:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.246:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:10:00.059880       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.246:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0920 18:10:00.060057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.246:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.246:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:10:00.439995       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.246:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0920 18:10:00.440071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.246:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.246:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:10:01.422414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.246:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0920 18:10:01.422543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.246:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.246:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:10:01.529973       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.246:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0920 18:10:01.530020       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.246:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.246:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:10:01.820086       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.246:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0920 18:10:01.820163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.246:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.246:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:10:03.826153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 18:10:03.826333       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:10:03.826521       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 18:10:03.826593       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0920 18:10:14.001364       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:10:31 ha-347193 kubelet[1310]: E0920 18:10:31.817867    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855831817396016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:10:31 ha-347193 kubelet[1310]: E0920 18:10:31.818307    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855831817396016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:10:39 ha-347193 kubelet[1310]: I0920 18:10:39.602659    1310 scope.go:117] "RemoveContainer" containerID="913c5022617727dacc8da0546cc89a522d28cdb23e4cfeffe185e1ed86ddf24e"
	Sep 20 18:10:41 ha-347193 kubelet[1310]: E0920 18:10:41.820757    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855841820104000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:10:41 ha-347193 kubelet[1310]: E0920 18:10:41.820833    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855841820104000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:10:51 ha-347193 kubelet[1310]: E0920 18:10:51.823626    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855851823132091,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:10:51 ha-347193 kubelet[1310]: E0920 18:10:51.824007    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855851823132091,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:11:01 ha-347193 kubelet[1310]: E0920 18:11:01.623548    1310 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 18:11:01 ha-347193 kubelet[1310]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:11:01 ha-347193 kubelet[1310]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:11:01 ha-347193 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:11:01 ha-347193 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:11:01 ha-347193 kubelet[1310]: E0920 18:11:01.826882    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855861826186275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:11:01 ha-347193 kubelet[1310]: E0920 18:11:01.826912    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855861826186275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:11:05 ha-347193 kubelet[1310]: I0920 18:11:05.599895    1310 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-347193" podUID="20d6faa4-600f-4bd0-8acb-1f95c047da58"
	Sep 20 18:11:05 ha-347193 kubelet[1310]: I0920 18:11:05.628263    1310 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-347193"
	Sep 20 18:11:11 ha-347193 kubelet[1310]: I0920 18:11:11.616995    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-347193" podStartSLOduration=6.61696956 podStartE2EDuration="6.61696956s" podCreationTimestamp="2024-09-20 18:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-20 18:11:11.616521208 +0000 UTC m=+730.156340705" watchObservedRunningTime="2024-09-20 18:11:11.61696956 +0000 UTC m=+730.156789056"
	Sep 20 18:11:11 ha-347193 kubelet[1310]: E0920 18:11:11.830491    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855871830034614,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:11:11 ha-347193 kubelet[1310]: E0920 18:11:11.830518    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855871830034614,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:11:21 ha-347193 kubelet[1310]: E0920 18:11:21.832101    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855881831829509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:11:21 ha-347193 kubelet[1310]: E0920 18:11:21.832129    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855881831829509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:11:31 ha-347193 kubelet[1310]: E0920 18:11:31.834690    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855891833841226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:11:31 ha-347193 kubelet[1310]: E0920 18:11:31.834902    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855891833841226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:11:41 ha-347193 kubelet[1310]: E0920 18:11:41.836676    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855901835983964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:11:41 ha-347193 kubelet[1310]: E0920 18:11:41.837438    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855901835983964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:11:46.275508  263544 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19679-237658/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-347193 -n ha-347193
helpers_test.go:261: (dbg) Run:  kubectl --context ha-347193 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.98s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 stop -v=7 --alsologtostderr
E0920 18:12:29.487244  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:30.941821  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-347193 stop -v=7 --alsologtostderr: exit status 82 (2m0.511847515s)

                                                
                                                
-- stdout --
	* Stopping node "ha-347193-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:12:05.868886  263985 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:12:05.869036  263985 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:12:05.869047  263985 out.go:358] Setting ErrFile to fd 2...
	I0920 18:12:05.869052  263985 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:12:05.869266  263985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 18:12:05.869568  263985 out.go:352] Setting JSON to false
	I0920 18:12:05.869670  263985 mustload.go:65] Loading cluster: ha-347193
	I0920 18:12:05.870223  263985 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:12:05.870362  263985 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 18:12:05.870594  263985 mustload.go:65] Loading cluster: ha-347193
	I0920 18:12:05.870782  263985 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:12:05.870819  263985 stop.go:39] StopHost: ha-347193-m04
	I0920 18:12:05.871216  263985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:12:05.871260  263985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:12:05.888164  263985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34381
	I0920 18:12:05.888743  263985 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:12:05.889438  263985 main.go:141] libmachine: Using API Version  1
	I0920 18:12:05.889466  263985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:12:05.889848  263985 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:12:05.893324  263985 out.go:177] * Stopping node "ha-347193-m04"  ...
	I0920 18:12:05.894449  263985 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 18:12:05.894481  263985 main.go:141] libmachine: (ha-347193-m04) Calling .DriverName
	I0920 18:12:05.894740  263985 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 18:12:05.894771  263985 main.go:141] libmachine: (ha-347193-m04) Calling .GetSSHHostname
	I0920 18:12:05.898747  263985 main.go:141] libmachine: (ha-347193-m04) DBG | domain ha-347193-m04 has defined MAC address 52:54:00:51:d7:3c in network mk-ha-347193
	I0920 18:12:05.899281  263985 main.go:141] libmachine: (ha-347193-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:d7:3c", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 19:11:33 +0000 UTC Type:0 Mac:52:54:00:51:d7:3c Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-347193-m04 Clientid:01:52:54:00:51:d7:3c}
	I0920 18:12:05.899311  263985 main.go:141] libmachine: (ha-347193-m04) DBG | domain ha-347193-m04 has defined IP address 192.168.39.234 and MAC address 52:54:00:51:d7:3c in network mk-ha-347193
	I0920 18:12:05.899513  263985 main.go:141] libmachine: (ha-347193-m04) Calling .GetSSHPort
	I0920 18:12:05.899795  263985 main.go:141] libmachine: (ha-347193-m04) Calling .GetSSHKeyPath
	I0920 18:12:05.900037  263985 main.go:141] libmachine: (ha-347193-m04) Calling .GetSSHUsername
	I0920 18:12:05.900262  263985 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193-m04/id_rsa Username:docker}
	I0920 18:12:05.984414  263985 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 18:12:06.039224  263985 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 18:12:06.091865  263985 main.go:141] libmachine: Stopping "ha-347193-m04"...
	I0920 18:12:06.091932  263985 main.go:141] libmachine: (ha-347193-m04) Calling .GetState
	I0920 18:12:06.094222  263985 main.go:141] libmachine: (ha-347193-m04) Calling .Stop
	I0920 18:12:06.099332  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 0/120
	I0920 18:12:07.100851  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 1/120
	I0920 18:12:08.102509  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 2/120
	I0920 18:12:09.104621  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 3/120
	I0920 18:12:10.106578  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 4/120
	I0920 18:12:11.108895  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 5/120
	I0920 18:12:12.110501  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 6/120
	I0920 18:12:13.111812  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 7/120
	I0920 18:12:14.113649  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 8/120
	I0920 18:12:15.115746  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 9/120
	I0920 18:12:16.117620  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 10/120
	I0920 18:12:17.119458  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 11/120
	I0920 18:12:18.120988  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 12/120
	I0920 18:12:19.122325  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 13/120
	I0920 18:12:20.124717  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 14/120
	I0920 18:12:21.126234  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 15/120
	I0920 18:12:22.128936  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 16/120
	I0920 18:12:23.130403  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 17/120
	I0920 18:12:24.132213  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 18/120
	I0920 18:12:25.133578  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 19/120
	I0920 18:12:26.136237  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 20/120
	I0920 18:12:27.137633  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 21/120
	I0920 18:12:28.139226  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 22/120
	I0920 18:12:29.141462  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 23/120
	I0920 18:12:30.143188  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 24/120
	I0920 18:12:31.145677  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 25/120
	I0920 18:12:32.147569  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 26/120
	I0920 18:12:33.149375  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 27/120
	I0920 18:12:34.151186  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 28/120
	I0920 18:12:35.152653  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 29/120
	I0920 18:12:36.154709  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 30/120
	I0920 18:12:37.156373  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 31/120
	I0920 18:12:38.157783  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 32/120
	I0920 18:12:39.159284  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 33/120
	I0920 18:12:40.160630  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 34/120
	I0920 18:12:41.162938  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 35/120
	I0920 18:12:42.165192  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 36/120
	I0920 18:12:43.167544  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 37/120
	I0920 18:12:44.169500  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 38/120
	I0920 18:12:45.171557  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 39/120
	I0920 18:12:46.174283  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 40/120
	I0920 18:12:47.175967  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 41/120
	I0920 18:12:48.177736  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 42/120
	I0920 18:12:49.179311  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 43/120
	I0920 18:12:50.180834  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 44/120
	I0920 18:12:51.182387  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 45/120
	I0920 18:12:52.184525  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 46/120
	I0920 18:12:53.186448  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 47/120
	I0920 18:12:54.188537  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 48/120
	I0920 18:12:55.190401  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 49/120
	I0920 18:12:56.192544  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 50/120
	I0920 18:12:57.194228  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 51/120
	I0920 18:12:58.196949  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 52/120
	I0920 18:12:59.198615  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 53/120
	I0920 18:13:00.200449  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 54/120
	I0920 18:13:01.202805  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 55/120
	I0920 18:13:02.204793  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 56/120
	I0920 18:13:03.206667  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 57/120
	I0920 18:13:04.208520  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 58/120
	I0920 18:13:05.210005  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 59/120
	I0920 18:13:06.211839  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 60/120
	I0920 18:13:07.213352  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 61/120
	I0920 18:13:08.215399  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 62/120
	I0920 18:13:09.217033  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 63/120
	I0920 18:13:10.219056  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 64/120
	I0920 18:13:11.221289  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 65/120
	I0920 18:13:12.222841  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 66/120
	I0920 18:13:13.224501  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 67/120
	I0920 18:13:14.226034  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 68/120
	I0920 18:13:15.227910  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 69/120
	I0920 18:13:16.230426  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 70/120
	I0920 18:13:17.232115  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 71/120
	I0920 18:13:18.233747  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 72/120
	I0920 18:13:19.235282  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 73/120
	I0920 18:13:20.237033  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 74/120
	I0920 18:13:21.239324  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 75/120
	I0920 18:13:22.240821  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 76/120
	I0920 18:13:23.242341  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 77/120
	I0920 18:13:24.244069  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 78/120
	I0920 18:13:25.245507  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 79/120
	I0920 18:13:26.247038  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 80/120
	I0920 18:13:27.248679  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 81/120
	I0920 18:13:28.250401  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 82/120
	I0920 18:13:29.252624  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 83/120
	I0920 18:13:30.254856  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 84/120
	I0920 18:13:31.257054  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 85/120
	I0920 18:13:32.258775  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 86/120
	I0920 18:13:33.260610  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 87/120
	I0920 18:13:34.262194  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 88/120
	I0920 18:13:35.264587  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 89/120
	I0920 18:13:36.267100  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 90/120
	I0920 18:13:37.269123  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 91/120
	I0920 18:13:38.270589  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 92/120
	I0920 18:13:39.272523  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 93/120
	I0920 18:13:40.274157  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 94/120
	I0920 18:13:41.275976  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 95/120
	I0920 18:13:42.277463  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 96/120
	I0920 18:13:43.278946  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 97/120
	I0920 18:13:44.280625  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 98/120
	I0920 18:13:45.282125  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 99/120
	I0920 18:13:46.284291  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 100/120
	I0920 18:13:47.285735  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 101/120
	I0920 18:13:48.287522  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 102/120
	I0920 18:13:49.289795  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 103/120
	I0920 18:13:50.291451  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 104/120
	I0920 18:13:51.293681  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 105/120
	I0920 18:13:52.295945  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 106/120
	I0920 18:13:53.297703  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 107/120
	I0920 18:13:54.299301  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 108/120
	I0920 18:13:55.300858  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 109/120
	I0920 18:13:56.302992  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 110/120
	I0920 18:13:57.304636  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 111/120
	I0920 18:13:58.307679  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 112/120
	I0920 18:13:59.309126  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 113/120
	I0920 18:14:00.310944  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 114/120
	I0920 18:14:01.313437  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 115/120
	I0920 18:14:02.315069  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 116/120
	I0920 18:14:03.316689  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 117/120
	I0920 18:14:04.318511  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 118/120
	I0920 18:14:05.320064  263985 main.go:141] libmachine: (ha-347193-m04) Waiting for machine to stop 119/120
	I0920 18:14:06.320907  263985 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0920 18:14:06.321057  263985 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0920 18:14:06.323486  263985 out.go:201] 
	W0920 18:14:06.325255  263985 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0920 18:14:06.325280  263985 out.go:270] * 
	* 
	W0920 18:14:06.327716  263985 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:14:06.329511  263985 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-347193 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Done: out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr: (18.931120967s)
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr": 
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr": 
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-347193 -n ha-347193
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-347193 logs -n 25: (1.627756742s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-347193 ssh -n ha-347193-m02 sudo cat                                          | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m03_ha-347193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m03:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04:/home/docker/cp-test_ha-347193-m03_ha-347193-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193-m04 sudo cat                                          | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m03_ha-347193-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-347193 cp testdata/cp-test.txt                                                | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3833348347/001/cp-test_ha-347193-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193:/home/docker/cp-test_ha-347193-m04_ha-347193.txt                       |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193 sudo cat                                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m04_ha-347193.txt                                 |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m02:/home/docker/cp-test_ha-347193-m04_ha-347193-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193-m02 sudo cat                                          | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m04_ha-347193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m03:/home/docker/cp-test_ha-347193-m04_ha-347193-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n                                                                 | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | ha-347193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-347193 ssh -n ha-347193-m03 sudo cat                                          | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC | 20 Sep 24 18:02 UTC |
	|         | /home/docker/cp-test_ha-347193-m04_ha-347193-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-347193 node stop m02 -v=7                                                     | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-347193 node start m02 -v=7                                                    | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-347193 -v=7                                                           | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-347193 -v=7                                                                | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-347193 --wait=true -v=7                                                    | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:07 UTC | 20 Sep 24 18:11 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-347193                                                                | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:11 UTC |                     |
	| node    | ha-347193 node delete m03 -v=7                                                   | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:11 UTC | 20 Sep 24 18:12 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-347193 stop -v=7                                                              | ha-347193 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:07:39
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:07:39.503894  262197 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:07:39.504024  262197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:07:39.504033  262197 out.go:358] Setting ErrFile to fd 2...
	I0920 18:07:39.504037  262197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:07:39.504275  262197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 18:07:39.504887  262197 out.go:352] Setting JSON to false
	I0920 18:07:39.505896  262197 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6602,"bootTime":1726849057,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:07:39.506020  262197 start.go:139] virtualization: kvm guest
	I0920 18:07:39.508454  262197 out.go:177] * [ha-347193] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:07:39.510022  262197 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 18:07:39.510065  262197 notify.go:220] Checking for updates...
	I0920 18:07:39.512364  262197 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:07:39.513870  262197 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 18:07:39.515227  262197 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:07:39.516549  262197 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:07:39.517938  262197 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:07:39.520032  262197 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:07:39.520168  262197 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:07:39.520853  262197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:07:39.520925  262197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:07:39.538153  262197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40871
	I0920 18:07:39.538609  262197 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:07:39.539327  262197 main.go:141] libmachine: Using API Version  1
	I0920 18:07:39.539356  262197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:07:39.539761  262197 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:07:39.540021  262197 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:07:39.578042  262197 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:07:39.579472  262197 start.go:297] selected driver: kvm2
	I0920 18:07:39.579501  262197 start.go:901] validating driver "kvm2" against &{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.234 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:07:39.579704  262197 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:07:39.580186  262197 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:07:39.580317  262197 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:07:39.596678  262197 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:07:39.597435  262197 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:07:39.597481  262197 cni.go:84] Creating CNI manager for ""
	I0920 18:07:39.597531  262197 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 18:07:39.597594  262197 start.go:340] cluster config:
	{Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.234 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel
:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:07:39.597737  262197 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:07:39.599843  262197 out.go:177] * Starting "ha-347193" primary control-plane node in "ha-347193" cluster
	I0920 18:07:39.601132  262197 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:07:39.601199  262197 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:07:39.601211  262197 cache.go:56] Caching tarball of preloaded images
	I0920 18:07:39.601321  262197 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:07:39.601333  262197 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:07:39.601444  262197 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/config.json ...
	I0920 18:07:39.601681  262197 start.go:360] acquireMachinesLock for ha-347193: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:07:39.601729  262197 start.go:364] duration metric: took 27.402µs to acquireMachinesLock for "ha-347193"
	I0920 18:07:39.601744  262197 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:07:39.601749  262197 fix.go:54] fixHost starting: 
	I0920 18:07:39.602028  262197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:07:39.602065  262197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:07:39.617626  262197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
	I0920 18:07:39.618189  262197 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:07:39.618694  262197 main.go:141] libmachine: Using API Version  1
	I0920 18:07:39.618721  262197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:07:39.619089  262197 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:07:39.619281  262197 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:07:39.619421  262197 main.go:141] libmachine: (ha-347193) Calling .GetState
	I0920 18:07:39.621577  262197 fix.go:112] recreateIfNeeded on ha-347193: state=Running err=<nil>
	W0920 18:07:39.621629  262197 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:07:39.624120  262197 out.go:177] * Updating the running kvm2 "ha-347193" VM ...
	I0920 18:07:39.625648  262197 machine.go:93] provisionDockerMachine start ...
	I0920 18:07:39.625685  262197 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:07:39.626013  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:07:39.629148  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:39.629675  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:07:39.629732  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:39.629932  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:07:39.630138  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:07:39.630314  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:07:39.630438  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:07:39.630660  262197 main.go:141] libmachine: Using SSH client type: native
	I0920 18:07:39.630880  262197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:07:39.630892  262197 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:07:39.743087  262197 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-347193
	
	I0920 18:07:39.743119  262197 main.go:141] libmachine: (ha-347193) Calling .GetMachineName
	I0920 18:07:39.743431  262197 buildroot.go:166] provisioning hostname "ha-347193"
	I0920 18:07:39.743464  262197 main.go:141] libmachine: (ha-347193) Calling .GetMachineName
	I0920 18:07:39.743663  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:07:39.747258  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:39.747630  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:07:39.747662  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:39.747802  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:07:39.748018  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:07:39.748170  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:07:39.748283  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:07:39.748472  262197 main.go:141] libmachine: Using SSH client type: native
	I0920 18:07:39.748703  262197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:07:39.748721  262197 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-347193 && echo "ha-347193" | sudo tee /etc/hostname
	I0920 18:07:39.886045  262197 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-347193
	
	I0920 18:07:39.886084  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:07:39.889057  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:39.889417  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:07:39.889447  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:39.889682  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:07:39.889929  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:07:39.890168  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:07:39.890340  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:07:39.890523  262197 main.go:141] libmachine: Using SSH client type: native
	I0920 18:07:39.890729  262197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:07:39.890752  262197 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-347193' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-347193/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-347193' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:07:40.003602  262197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
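The SSH command above is the provisioner's idempotent /etc/hosts fix-up: if no entry already maps to ha-347193, it rewrites an existing 127.0.1.1 line or appends one. A stand-alone sketch of building that command string for an arbitrary hostname (composeHostsFix is an invented helper name, not minikube's API):

package main

import "fmt"

// composeHostsFix builds the same idempotent /etc/hosts fix-up seen in the
// log, parameterized by hostname. It only returns the command string;
// shipping it over SSH is out of scope for this sketch.
func composeHostsFix(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts
  else
    echo '127.0.1.1 %s' | sudo tee -a /etc/hosts
  fi
fi`, hostname, hostname, hostname)
}

func main() {
	fmt.Println(composeHostsFix("ha-347193"))
}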
	I0920 18:07:40.003645  262197 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 18:07:40.003683  262197 buildroot.go:174] setting up certificates
	I0920 18:07:40.003701  262197 provision.go:84] configureAuth start
	I0920 18:07:40.003719  262197 main.go:141] libmachine: (ha-347193) Calling .GetMachineName
	I0920 18:07:40.004065  262197 main.go:141] libmachine: (ha-347193) Calling .GetIP
	I0920 18:07:40.007905  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:40.008489  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:07:40.008513  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:40.008782  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:07:40.011199  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:40.011622  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:07:40.011669  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:40.011859  262197 provision.go:143] copyHostCerts
	I0920 18:07:40.011894  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 18:07:40.011940  262197 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 18:07:40.011958  262197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 18:07:40.012029  262197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 18:07:40.012119  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 18:07:40.012140  262197 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 18:07:40.012144  262197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 18:07:40.012169  262197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 18:07:40.012209  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 18:07:40.012230  262197 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 18:07:40.012236  262197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 18:07:40.012258  262197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 18:07:40.012307  262197 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.ha-347193 san=[127.0.0.1 192.168.39.246 ha-347193 localhost minikube]
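The server certificate above is generated with SANs [127.0.0.1 192.168.39.246 ha-347193 localhost minikube]. For illustration only, a self-signed certificate carrying the same SAN set can be produced with Go's crypto/x509; minikube's real path signs with the profile's CA key instead, so treat this as a sketch:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed for simplicity; the SANs mirror the ones in the log line.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-347193"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-347193", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.246")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	// Emit the certificate in PEM form, as a server.pem-style file would hold it.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}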
	I0920 18:07:40.212632  262197 provision.go:177] copyRemoteCerts
	I0920 18:07:40.212709  262197 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:07:40.212738  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:07:40.215770  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:40.216077  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:07:40.216115  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:40.216351  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:07:40.216614  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:07:40.216763  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:07:40.216919  262197 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 18:07:40.301135  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 18:07:40.301229  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:07:40.329452  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 18:07:40.329557  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0920 18:07:40.355961  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 18:07:40.356055  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:07:40.383019  262197 provision.go:87] duration metric: took 379.301144ms to configureAuth
	I0920 18:07:40.383051  262197 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:07:40.383287  262197 config.go:182] Loaded profile config "ha-347193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:07:40.383390  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:07:40.386239  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:40.386599  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:07:40.386623  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:07:40.386823  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:07:40.387107  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:07:40.387403  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:07:40.387553  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:07:40.387802  262197 main.go:141] libmachine: Using SSH client type: native
	I0920 18:07:40.387984  262197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:07:40.388000  262197 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:09:11.317242  262197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:09:11.317283  262197 machine.go:96] duration metric: took 1m31.691605401s to provisionDockerMachine
	I0920 18:09:11.317296  262197 start.go:293] postStartSetup for "ha-347193" (driver="kvm2")
	I0920 18:09:11.317316  262197 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:09:11.317334  262197 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:09:11.317647  262197 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:09:11.317682  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:09:11.322426  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.323312  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:09:11.323345  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.323526  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:09:11.323822  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:09:11.324068  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:09:11.324294  262197 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 18:09:11.414958  262197 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:09:11.419701  262197 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:09:11.419731  262197 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 18:09:11.419827  262197 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 18:09:11.419942  262197 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 18:09:11.419956  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /etc/ssl/certs/2448492.pem
	I0920 18:09:11.420066  262197 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:09:11.430425  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:09:11.456605  262197 start.go:296] duration metric: took 139.280961ms for postStartSetup
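postStartSetup above scans the .minikube/files tree for local assets and pushes matches (here 2448492.pem) into /etc/ssl/certs on the guest. A stand-alone sketch of the scan-and-copy idea, with invented paths and not minikube's filesync code:

package main

import (
	"fmt"
	"io"
	"io/fs"
	"os"
	"path/filepath"
)

// copyTree copies every regular file under src into dst, preserving the
// relative layout, in the spirit of the "scanning ... files for local assets"
// step in the log.
func copyTree(src, dst string) error {
	return filepath.WalkDir(src, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(src, path)
		if err != nil {
			return err
		}
		target := filepath.Join(dst, rel)
		if err := os.MkdirAll(filepath.Dir(target), 0o755); err != nil {
			return err
		}
		in, err := os.Open(path)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(target)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	})
}

func main() {
	// Example invocation with invented source/destination directories.
	if err := copyTree(".minikube/files", "/tmp/minikube-assets"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}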
	I0920 18:09:11.456680  262197 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:09:11.457054  262197 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0920 18:09:11.457090  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:09:11.461168  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.461727  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:09:11.461753  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.462065  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:09:11.462333  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:09:11.462561  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:09:11.462847  262197 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	W0920 18:09:11.549106  262197 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0920 18:09:11.549137  262197 fix.go:56] duration metric: took 1m31.947388158s for fixHost
	I0920 18:09:11.549162  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:09:11.552992  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.553487  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:09:11.553517  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.553693  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:09:11.553985  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:09:11.554242  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:09:11.554421  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:09:11.554622  262197 main.go:141] libmachine: Using SSH client type: native
	I0920 18:09:11.554821  262197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:09:11.554834  262197 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:09:11.667314  262197 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855751.637791068
	
	I0920 18:09:11.667347  262197 fix.go:216] guest clock: 1726855751.637791068
	I0920 18:09:11.667355  262197 fix.go:229] Guest: 2024-09-20 18:09:11.637791068 +0000 UTC Remote: 2024-09-20 18:09:11.549145056 +0000 UTC m=+92.083014358 (delta=88.646012ms)
	I0920 18:09:11.667405  262197 fix.go:200] guest clock delta is within tolerance: 88.646012ms
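The clock check compares the guest's date +%s.%N output against the host-side timestamp and accepts the guest clock when the difference is small. A toy version of that comparison using the two timestamps from the log (the 2-second tolerance is an assumed value for illustration, not necessarily minikube's constant):

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Date(2024, 9, 20, 18, 9, 11, 637791068, time.UTC)
	remote := time.Date(2024, 9, 20, 18, 9, 11, 549145056, time.UTC)
	const tolerance = 2 * time.Second // illustrative threshold

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}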
	I0920 18:09:11.667412  262197 start.go:83] releasing machines lock for "ha-347193", held for 1m32.065673058s
	I0920 18:09:11.667442  262197 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:09:11.667781  262197 main.go:141] libmachine: (ha-347193) Calling .GetIP
	I0920 18:09:11.670992  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.671441  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:09:11.671463  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.671634  262197 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:09:11.672410  262197 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:09:11.672636  262197 main.go:141] libmachine: (ha-347193) Calling .DriverName
	I0920 18:09:11.672746  262197 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:09:11.672823  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:09:11.672856  262197 ssh_runner.go:195] Run: cat /version.json
	I0920 18:09:11.672880  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHHostname
	I0920 18:09:11.675595  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.675935  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.676071  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:09:11.676097  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.676266  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:09:11.676334  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:09:11.676354  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:11.676443  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:09:11.676507  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHPort
	I0920 18:09:11.676732  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHKeyPath
	I0920 18:09:11.676733  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:09:11.676903  262197 main.go:141] libmachine: (ha-347193) Calling .GetSSHUsername
	I0920 18:09:11.676903  262197 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 18:09:11.677037  262197 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/ha-347193/id_rsa Username:docker}
	I0920 18:09:11.755449  262197 ssh_runner.go:195] Run: systemctl --version
	I0920 18:09:11.796592  262197 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:09:11.954669  262197 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:09:11.964068  262197 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:09:11.964160  262197 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:09:11.973433  262197 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 18:09:11.973469  262197 start.go:495] detecting cgroup driver to use...
	I0920 18:09:11.973543  262197 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:09:11.990922  262197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:09:12.005474  262197 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:09:12.005537  262197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:09:12.019735  262197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:09:12.034120  262197 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:09:12.189059  262197 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:09:12.341563  262197 docker.go:233] disabling docker service ...
	I0920 18:09:12.341637  262197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:09:12.359710  262197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:09:12.373603  262197 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:09:12.533281  262197 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:09:12.679480  262197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:09:12.695167  262197 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:09:12.714826  262197 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:09:12.714895  262197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:09:12.725596  262197 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:09:12.725686  262197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:09:12.736797  262197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:09:12.747995  262197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:09:12.758767  262197 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:09:12.773849  262197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:09:12.805140  262197 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:09:12.819784  262197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:09:12.832323  262197 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:09:12.843665  262197 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:09:12.854322  262197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:09:13.004279  262197 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:09:13.293885  262197 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:09:13.293997  262197 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:09:13.300376  262197 start.go:563] Will wait 60s for crictl version
	I0920 18:09:13.300447  262197 ssh_runner.go:195] Run: which crictl
	I0920 18:09:13.304402  262197 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:09:13.342669  262197 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:09:13.342747  262197 ssh_runner.go:195] Run: crio --version
	I0920 18:09:13.373347  262197 ssh_runner.go:195] Run: crio --version
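After CRI-O is restarted, the log waits up to 60s for /var/run/crio/crio.sock to appear and then for crictl to answer a version query. A minimal stand-alone polling loop in that spirit (interval and error text chosen here for illustration, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a filesystem path until it appears or the timeout
// elapses, mirroring the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}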
	I0920 18:09:13.405881  262197 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:09:13.407531  262197 main.go:141] libmachine: (ha-347193) Calling .GetIP
	I0920 18:09:13.410513  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:13.410936  262197 main.go:141] libmachine: (ha-347193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:07:bb", ip: ""} in network mk-ha-347193: {Iface:virbr1 ExpiryTime:2024-09-20 18:58:33 +0000 UTC Type:0 Mac:52:54:00:2e:07:bb Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-347193 Clientid:01:52:54:00:2e:07:bb}
	I0920 18:09:13.410958  262197 main.go:141] libmachine: (ha-347193) DBG | domain ha-347193 has defined IP address 192.168.39.246 and MAC address 52:54:00:2e:07:bb in network mk-ha-347193
	I0920 18:09:13.411271  262197 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:09:13.416559  262197 kubeadm.go:883] updating cluster {Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.234 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:09:13.416862  262197 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:09:13.416945  262197 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:09:13.462552  262197 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:09:13.462589  262197 crio.go:433] Images already preloaded, skipping extraction
	I0920 18:09:13.462672  262197 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:09:13.511112  262197 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:09:13.511148  262197 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:09:13.511159  262197 kubeadm.go:934] updating node { 192.168.39.246 8443 v1.31.1 crio true true} ...
	I0920 18:09:13.511278  262197 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-347193 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:09:13.511365  262197 ssh_runner.go:195] Run: crio config
	I0920 18:09:13.562219  262197 cni.go:84] Creating CNI manager for ""
	I0920 18:09:13.562245  262197 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 18:09:13.562258  262197 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:09:13.562282  262197 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.246 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-347193 NodeName:ha-347193 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:09:13.562439  262197 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-347193"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
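
	The multi-document manifest above is what later gets written to /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration (not minikube's own code), the following Go sketch splits such a file on its "---" separators and reports the kind of each document; the local file name "kubeadm.yaml" and the naive scan for a "kind:" line are assumptions made purely for the example.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// File name is an assumption for the sketch; minikube writes the generated
		// config to /var/tmp/minikube/kubeadm.yaml.new on the node.
		data, err := os.ReadFile("kubeadm.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, "read:", err)
			os.Exit(1)
		}
		// Multi-document YAML separates documents with "---" on its own line.
		for i, doc := range strings.Split(string(data), "\n---\n") {
			kind := "(unknown)"
			for _, line := range strings.Split(doc, "\n") {
				trimmed := strings.TrimSpace(line)
				if strings.HasPrefix(trimmed, "kind:") {
					kind = strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
					break
				}
			}
			fmt.Printf("document %d: kind=%s\n", i+1, kind)
		}
	}

	Run against the config above, this would report InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in order.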
	
	I0920 18:09:13.562460  262197 kube-vip.go:115] generating kube-vip config ...
	I0920 18:09:13.562505  262197 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 18:09:13.574725  262197 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 18:09:13.574846  262197 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
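
	The static pod manifest above is rendered by kube-vip.go from a Go text/template, with the VIP address (192.168.39.254) and API server port (8443) substituted in before it is copied to /etc/kubernetes/manifests/kube-vip.yaml. Below is a minimal sketch of that templating approach, assuming a heavily reduced template and struct; it is not minikube's actual template or types.

	package main

	import (
		"os"
		"text/template"
	)

	// vipConfig carries the values substituted into the manifest; the field
	// names are assumptions for this sketch, not minikube's real struct.
	type vipConfig struct {
		Address string
		Port    int
	}

	const manifestTmpl = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    args: ["manager"]
	    env:
	    - name: address
	      value: "{{ .Address }}"
	    - name: port
	      value: "{{ .Port }}"
	  hostNetwork: true
	`

	func main() {
		t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
		// Values mirror what the log shows for this cluster.
		if err := t.Execute(os.Stdout, vipConfig{Address: "192.168.39.254", Port: 8443}); err != nil {
			panic(err)
		}
	}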
	I0920 18:09:13.574922  262197 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:09:13.585533  262197 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:09:13.585621  262197 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 18:09:13.595171  262197 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0920 18:09:13.612730  262197 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:09:13.630200  262197 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 18:09:13.647994  262197 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 18:09:13.666720  262197 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 18:09:13.672026  262197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:09:13.824146  262197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:09:13.840231  262197 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193 for IP: 192.168.39.246
	I0920 18:09:13.840267  262197 certs.go:194] generating shared ca certs ...
	I0920 18:09:13.840289  262197 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:09:13.840449  262197 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 18:09:13.840486  262197 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 18:09:13.840494  262197 certs.go:256] generating profile certs ...
	I0920 18:09:13.840562  262197 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/client.key
	I0920 18:09:13.840591  262197 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.5ff8df21
	I0920 18:09:13.840634  262197 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.5ff8df21 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.241 192.168.39.250 192.168.39.254]
	I0920 18:09:14.011532  262197 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.5ff8df21 ...
	I0920 18:09:14.011569  262197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.5ff8df21: {Name:mk87d5be8d22deba5ad64b8a99e8620b7d2383e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:09:14.011757  262197 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.5ff8df21 ...
	I0920 18:09:14.011770  262197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.5ff8df21: {Name:mk3a705862b56533dea29ea23f7e721858c5ac92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:09:14.011841  262197 certs.go:381] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt.5ff8df21 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt
	I0920 18:09:14.011984  262197 certs.go:385] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key.5ff8df21 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key
	I0920 18:09:14.012113  262197 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key
	I0920 18:09:14.012132  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 18:09:14.012145  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 18:09:14.012155  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 18:09:14.012165  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 18:09:14.012174  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 18:09:14.012184  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 18:09:14.012200  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 18:09:14.012211  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 18:09:14.012285  262197 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 18:09:14.012318  262197 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 18:09:14.012327  262197 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:09:14.012347  262197 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:09:14.012367  262197 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:09:14.012388  262197 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 18:09:14.012425  262197 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:09:14.012450  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem -> /usr/share/ca-certificates/244849.pem
	I0920 18:09:14.012463  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /usr/share/ca-certificates/2448492.pem
	I0920 18:09:14.012476  262197 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:09:14.013034  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:09:14.039002  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:09:14.064810  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:09:14.089078  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:09:14.113592  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 18:09:14.138669  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:09:14.165128  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:09:14.191582  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/ha-347193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:09:14.219089  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 18:09:14.246135  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 18:09:14.272148  262197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:09:14.297755  262197 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:09:14.315945  262197 ssh_runner.go:195] Run: openssl version
	I0920 18:09:14.322891  262197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 18:09:14.334496  262197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 18:09:14.339723  262197 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 18:09:14.339803  262197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 18:09:14.346098  262197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 18:09:14.357108  262197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 18:09:14.368897  262197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 18:09:14.373459  262197 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 18:09:14.373527  262197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 18:09:14.379325  262197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:09:14.389168  262197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:09:14.400425  262197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:09:14.404876  262197 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:09:14.404958  262197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:09:14.410506  262197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:09:14.421884  262197 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:09:14.426729  262197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:09:14.432598  262197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:09:14.439146  262197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:09:14.445085  262197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:09:14.451248  262197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:09:14.457219  262197 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
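
	Each "openssl x509 -checkend 86400" run above asks whether the named certificate will still be valid 24 hours from now. The same check can be expressed with Go's crypto/x509, as in the sketch below; the hard-coded certificate path is taken from the log and the exit-code convention is an assumption for the example.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Path mirrors one of the certs checked in the log.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, "read:", err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, "parse:", err)
			os.Exit(1)
		}
		// Equivalent of `openssl x509 -checkend 86400`: does the cert outlive 24h?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h:", cert.NotAfter)
			os.Exit(1)
		}
		fmt.Println("certificate valid past 24h:", cert.NotAfter)
	}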
	I0920 18:09:14.463553  262197 kubeadm.go:392] StartCluster: {Name:ha-347193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-347193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.234 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:09:14.463740  262197 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:09:14.463809  262197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:09:14.510760  262197 cri.go:89] found id: "e00c1c1a07d23b0e7e743bc72a4b8cb588d77f337f2a6d47c89ebe4b153b85cd"
	I0920 18:09:14.510788  262197 cri.go:89] found id: "315821263cc3b7bd2a478cb35322982eda4a845f9fc5b8086022daec034a1460"
	I0920 18:09:14.510792  262197 cri.go:89] found id: "c6f3217d6efc41512b1e0ce34c3d0a20836e299bfc6a4f9f41c15168f43b3366"
	I0920 18:09:14.510795  262197 cri.go:89] found id: "6f54f7a5f2c32d6bc0ec6ee35174b79a19554688494c5a052f414808aba6d5d3"
	I0920 18:09:14.510798  262197 cri.go:89] found id: "998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5"
	I0920 18:09:14.510801  262197 cri.go:89] found id: "4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01"
	I0920 18:09:14.510806  262197 cri.go:89] found id: "54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9"
	I0920 18:09:14.510808  262197 cri.go:89] found id: "ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5"
	I0920 18:09:14.510810  262197 cri.go:89] found id: "3702c95ae17f31f21a9df60d65fe7c873d5a4a63c4bb0951d83c81da6fdcdcc9"
	I0920 18:09:14.510815  262197 cri.go:89] found id: "dce6ebcdcfa257db8db2294b5d4f9a4b32180c57f4a35e50afb651fea43c30c4"
	I0920 18:09:14.510818  262197 cri.go:89] found id: "b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d"
	I0920 18:09:14.510820  262197 cri.go:89] found id: "6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f"
	I0920 18:09:14.510823  262197 cri.go:89] found id: "5db95e41c4eeebdea4c6e2d542a978f368ff56b30edfe1ff54593feed82c7c09"
	I0920 18:09:14.510826  262197 cri.go:89] found id: ""
	I0920 18:09:14.510892  262197 ssh_runner.go:195] Run: sudo runc list -f json
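
	After collecting the kube-system container IDs above, the run lists low-level container state with "sudo runc list -f json". The sketch below shows one way such JSON could be consumed from Go; the struct field names ("id", "status", "bundle") are assumptions about runc's state output, and any fields not listed are simply ignored by the decoder.

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// runcContainer holds the fields read from the JSON; names are assumptions
	// for this sketch, not a documented schema.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
		Bundle string `json:"bundle"`
	}

	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "runc list:", err)
			os.Exit(1)
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, c := range containers {
			fmt.Printf("%s  %s  %s\n", c.ID, c.Status, c.Bundle)
		}
	}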
	
	
	==> CRI-O <==
	Sep 20 18:14:25 ha-347193 crio[3597]: time="2024-09-20 18:14:25.894738922Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856065894712358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe837fed-eb3a-4d57-8c45-377df92a017f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:14:25 ha-347193 crio[3597]: time="2024-09-20 18:14:25.895503558Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=494f69ea-f87f-4277-beac-3802128c6260 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:14:25 ha-347193 crio[3597]: time="2024-09-20 18:14:25.895573005Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=494f69ea-f87f-4277-beac-3802128c6260 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:14:25 ha-347193 crio[3597]: time="2024-09-20 18:14:25.895994201Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a32639c7e7c0be6723028368216a23bf1bb33fbcb99b430ed46adf2e1ea4e331,PodSandboxId:7632b2b5457be0d21dfe88928eba2b68988602a96077a81b46aa31cd4d7f107a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726855839618059722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf83a94c43f015b6bd1ac29c32b1c20d057598c03b801cd5b210ddc46cd83d7f,PodSandboxId:87436c1b35bb902bec09e2aa397f6cec52bd5de1fc0ce27345e231fdb57e223e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855801629029619,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31aaef813076b4a046a50b286bf9e3353a117fb39496ae588ea8966417cc50dc,PodSandboxId:4d10a800b45c8f4bc465562ec9c102143b39510a15dce3a2f692025a08d8fac6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855801613530695,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913c5022617727dacc8da0546cc89a522d28cdb23e4cfeffe185e1ed86ddf24e,PodSandboxId:7632b2b5457be0d21dfe88928eba2b68988602a96077a81b46aa31cd4d7f107a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726855795610657675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b86283c7ec4b8b76ad57646f8ede71f8fbd2488da1cdc5e493d7e2e3981b503,PodSandboxId:01ad1d593334bd4f609544ef6f23ed3429b6ad565f373acf5cb64daf6fc99cf6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726855792920406648,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab0f9006f2cb385c7ebb9a923435abda5a700ecf229499c129912b006a9c348,PodSandboxId:1cb5129054927c6f3931cf21cd9e4e1401868fb3a6b5181918e84c0f803cad17,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726855774324549826,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8015b973cd7ad8950a83e2c6acba07ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d376e9770a85a40d5a2f2a68e191b487630df8377fcfcb26b99623bdf354431a,PodSandboxId:5e0f11d03a2a1c6055b2a886ed8284a020be5770487b95a28f5f489e7cfd5757,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726855760023951931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:e4e60e2dd4d595c0dba3467787581b6abb97b4ea3dc50c7556910dea480a591c,PodSandboxId:b9937d468b42f5b8c251f10008f20fb114c4b810d43836929fdf9e36428b8708,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855759751836563,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6758e679045b017c3f1bb18c6052183ee469e16e30b035c3a54985570a4731f,PodSandboxId:0b4488f4dcb2291f5eb49780da9bfcb337b9a164a3168977698df5c75b65fb0c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855759684501493,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb34dd2113dfdecc52e4e844500d8e47d56b75ddd0f12a74b7babec6601a732,PodSandboxId:7b094de6a3f1dec8c2b782fd9c3a5654d9d40297fecf6d49728073f2c4e1db91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726855759520928633,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c46db4ca04a2f60a9e34d715c267fd8cfabc0d403c54d77c65f4c3a60a54e315,PodSandboxId:71332909fe6bc0909cff841b788d7cb263b84fb5b02e0134879011618e5ee2a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855759600999590,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9adca152c5d4c155cc64086c8ecb73c1c77d7152d7a3749d702f64c98f89e7e,PodSandboxId:fe4d3c58e8ee5c064f06fd55412e29e823053f7b801edac843e6f5836a9baab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855759453448071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154
fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f71a8c2fd0f7fa3427392665a6a4fe2a1aebe4cacfcabb32935483b98ad88ba1,PodSandboxId:4d10a800b45c8f4bc465562ec9c102143b39510a15dce3a2f692025a08d8fac6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726855759448735001,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6994ccac0eaf33e08af026f534ceacfa702115682033809428acc70f8d685d8e,PodSandboxId:87436c1b35bb902bec09e2aa397f6cec52bd5de1fc0ce27345e231fdb57e223e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726855759320053700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d13f339c817f2c3d48c059df5dbe95c744b4d770d8f156847fa2a2faddfb16,PodSandboxId:d56c4fb5022a404a46d5f7d82850d40a0aa7777c181ec28335c613507545a0bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726855304216934251,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5,PodSandboxId:cfb097797b5199e13abd148addc36710b46ff21a5a455fd00cb451d38da7e05d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855158811909217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01,PodSandboxId:503157b6402f3fdb0fbd3acb2400d53236322647d77c96f1052ed35d749180f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855158741057313,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9,PodSandboxId:d420593f085b4cce05c57a111a173b0556ed0c67debb38fe8785523e9aaf085f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726855147923732386,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5,PodSandboxId:db50f6f39d94cb0d879c4bc3f389365a1aefdeeb9341d4b7a315ad9363b767e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726855146590192031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f,PodSandboxId:6700b91af83d5fda728b258af762ef66bdc5850c898a59cb3d1e0f4a4a4f5bf2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726855135139539095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d,PodSandboxId:3b399285f0a3eef630d7233d6655753e34da36bbb3233127467544caa535e7b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726855135143537875,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=494f69ea-f87f-4277-beac-3802128c6260 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:14:25 ha-347193 crio[3597]: time="2024-09-20 18:14:25.936523661Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1365b1e8-2ab6-4f40-ac67-00788a62f012 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:14:25 ha-347193 crio[3597]: time="2024-09-20 18:14:25.936597287Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1365b1e8-2ab6-4f40-ac67-00788a62f012 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:14:25 ha-347193 crio[3597]: time="2024-09-20 18:14:25.938029260Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4402eb41-ad4d-469f-9a1a-f30133214210 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:14:25 ha-347193 crio[3597]: time="2024-09-20 18:14:25.938517051Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856065938492642,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4402eb41-ad4d-469f-9a1a-f30133214210 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:14:25 ha-347193 crio[3597]: time="2024-09-20 18:14:25.938999658Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5315c437-3b8c-4f75-8fa4-7e87662714d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:14:25 ha-347193 crio[3597]: time="2024-09-20 18:14:25.939058457Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5315c437-3b8c-4f75-8fa4-7e87662714d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:14:25 ha-347193 crio[3597]: time="2024-09-20 18:14:25.939680309Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a32639c7e7c0be6723028368216a23bf1bb33fbcb99b430ed46adf2e1ea4e331,PodSandboxId:7632b2b5457be0d21dfe88928eba2b68988602a96077a81b46aa31cd4d7f107a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726855839618059722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf83a94c43f015b6bd1ac29c32b1c20d057598c03b801cd5b210ddc46cd83d7f,PodSandboxId:87436c1b35bb902bec09e2aa397f6cec52bd5de1fc0ce27345e231fdb57e223e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855801629029619,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31aaef813076b4a046a50b286bf9e3353a117fb39496ae588ea8966417cc50dc,PodSandboxId:4d10a800b45c8f4bc465562ec9c102143b39510a15dce3a2f692025a08d8fac6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855801613530695,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913c5022617727dacc8da0546cc89a522d28cdb23e4cfeffe185e1ed86ddf24e,PodSandboxId:7632b2b5457be0d21dfe88928eba2b68988602a96077a81b46aa31cd4d7f107a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726855795610657675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b86283c7ec4b8b76ad57646f8ede71f8fbd2488da1cdc5e493d7e2e3981b503,PodSandboxId:01ad1d593334bd4f609544ef6f23ed3429b6ad565f373acf5cb64daf6fc99cf6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726855792920406648,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab0f9006f2cb385c7ebb9a923435abda5a700ecf229499c129912b006a9c348,PodSandboxId:1cb5129054927c6f3931cf21cd9e4e1401868fb3a6b5181918e84c0f803cad17,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726855774324549826,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8015b973cd7ad8950a83e2c6acba07ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d376e9770a85a40d5a2f2a68e191b487630df8377fcfcb26b99623bdf354431a,PodSandboxId:5e0f11d03a2a1c6055b2a886ed8284a020be5770487b95a28f5f489e7cfd5757,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726855760023951931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:e4e60e2dd4d595c0dba3467787581b6abb97b4ea3dc50c7556910dea480a591c,PodSandboxId:b9937d468b42f5b8c251f10008f20fb114c4b810d43836929fdf9e36428b8708,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855759751836563,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6758e679045b017c3f1bb18c6052183ee469e16e30b035c3a54985570a4731f,PodSandboxId:0b4488f4dcb2291f5eb49780da9bfcb337b9a164a3168977698df5c75b65fb0c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855759684501493,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb34dd2113dfdecc52e4e844500d8e47d56b75ddd0f12a74b7babec6601a732,PodSandboxId:7b094de6a3f1dec8c2b782fd9c3a5654d9d40297fecf6d49728073f2c4e1db91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726855759520928633,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c46db4ca04a2f60a9e34d715c267fd8cfabc0d403c54d77c65f4c3a60a54e315,PodSandboxId:71332909fe6bc0909cff841b788d7cb263b84fb5b02e0134879011618e5ee2a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855759600999590,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9adca152c5d4c155cc64086c8ecb73c1c77d7152d7a3749d702f64c98f89e7e,PodSandboxId:fe4d3c58e8ee5c064f06fd55412e29e823053f7b801edac843e6f5836a9baab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855759453448071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154
fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f71a8c2fd0f7fa3427392665a6a4fe2a1aebe4cacfcabb32935483b98ad88ba1,PodSandboxId:4d10a800b45c8f4bc465562ec9c102143b39510a15dce3a2f692025a08d8fac6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726855759448735001,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6994ccac0eaf33e08af026f534ceacfa702115682033809428acc70f8d685d8e,PodSandboxId:87436c1b35bb902bec09e2aa397f6cec52bd5de1fc0ce27345e231fdb57e223e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726855759320053700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d13f339c817f2c3d48c059df5dbe95c744b4d770d8f156847fa2a2faddfb16,PodSandboxId:d56c4fb5022a404a46d5f7d82850d40a0aa7777c181ec28335c613507545a0bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726855304216934251,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5,PodSandboxId:cfb097797b5199e13abd148addc36710b46ff21a5a455fd00cb451d38da7e05d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855158811909217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01,PodSandboxId:503157b6402f3fdb0fbd3acb2400d53236322647d77c96f1052ed35d749180f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855158741057313,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9,PodSandboxId:d420593f085b4cce05c57a111a173b0556ed0c67debb38fe8785523e9aaf085f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726855147923732386,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5,PodSandboxId:db50f6f39d94cb0d879c4bc3f389365a1aefdeeb9341d4b7a315ad9363b767e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726855146590192031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f,PodSandboxId:6700b91af83d5fda728b258af762ef66bdc5850c898a59cb3d1e0f4a4a4f5bf2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726855135139539095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d,PodSandboxId:3b399285f0a3eef630d7233d6655753e34da36bbb3233127467544caa535e7b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726855135143537875,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5315c437-3b8c-4f75-8fa4-7e87662714d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:14:26 ha-347193 crio[3597]: time="2024-09-20 18:14:26.024869600Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e62376d0-127f-43a3-8a2a-2dfc68c70059 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:14:26 ha-347193 crio[3597]: time="2024-09-20 18:14:26.024962360Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e62376d0-127f-43a3-8a2a-2dfc68c70059 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:14:26 ha-347193 crio[3597]: time="2024-09-20 18:14:26.026068424Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4e2ba961-dcb5-4dd2-8d82-b726ffc413ae name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:14:26 ha-347193 crio[3597]: time="2024-09-20 18:14:26.026615858Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856066026589604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4e2ba961-dcb5-4dd2-8d82-b726ffc413ae name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:14:26 ha-347193 crio[3597]: time="2024-09-20 18:14:26.027224816Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc8964fe-acf2-4ccd-a261-2a6392d92e2a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:14:26 ha-347193 crio[3597]: time="2024-09-20 18:14:26.027317469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc8964fe-acf2-4ccd-a261-2a6392d92e2a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:14:26 ha-347193 crio[3597]: time="2024-09-20 18:14:26.027787762Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a32639c7e7c0be6723028368216a23bf1bb33fbcb99b430ed46adf2e1ea4e331,PodSandboxId:7632b2b5457be0d21dfe88928eba2b68988602a96077a81b46aa31cd4d7f107a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726855839618059722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf83a94c43f015b6bd1ac29c32b1c20d057598c03b801cd5b210ddc46cd83d7f,PodSandboxId:87436c1b35bb902bec09e2aa397f6cec52bd5de1fc0ce27345e231fdb57e223e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726855801629029619,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31aaef813076b4a046a50b286bf9e3353a117fb39496ae588ea8966417cc50dc,PodSandboxId:4d10a800b45c8f4bc465562ec9c102143b39510a15dce3a2f692025a08d8fac6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726855801613530695,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913c5022617727dacc8da0546cc89a522d28cdb23e4cfeffe185e1ed86ddf24e,PodSandboxId:7632b2b5457be0d21dfe88928eba2b68988602a96077a81b46aa31cd4d7f107a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726855795610657675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8924f7ce-85a0-4587-9c05-8a74c7113e9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b86283c7ec4b8b76ad57646f8ede71f8fbd2488da1cdc5e493d7e2e3981b503,PodSandboxId:01ad1d593334bd4f609544ef6f23ed3429b6ad565f373acf5cb64daf6fc99cf6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726855792920406648,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab0f9006f2cb385c7ebb9a923435abda5a700ecf229499c129912b006a9c348,PodSandboxId:1cb5129054927c6f3931cf21cd9e4e1401868fb3a6b5181918e84c0f803cad17,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726855774324549826,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8015b973cd7ad8950a83e2c6acba07ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d376e9770a85a40d5a2f2a68e191b487630df8377fcfcb26b99623bdf354431a,PodSandboxId:5e0f11d03a2a1c6055b2a886ed8284a020be5770487b95a28f5f489e7cfd5757,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726855760023951931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:e4e60e2dd4d595c0dba3467787581b6abb97b4ea3dc50c7556910dea480a591c,PodSandboxId:b9937d468b42f5b8c251f10008f20fb114c4b810d43836929fdf9e36428b8708,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855759751836563,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6758e679045b017c3f1bb18c6052183ee469e16e30b035c3a54985570a4731f,PodSandboxId:0b4488f4dcb2291f5eb49780da9bfcb337b9a164a3168977698df5c75b65fb0c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726855759684501493,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb34dd2113dfdecc52e4e844500d8e47d56b75ddd0f12a74b7babec6601a732,PodSandboxId:7b094de6a3f1dec8c2b782fd9c3a5654d9d40297fecf6d49728073f2c4e1db91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726855759520928633,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c46db4ca04a2f60a9e34d715c267fd8cfabc0d403c54d77c65f4c3a60a54e315,PodSandboxId:71332909fe6bc0909cff841b788d7cb263b84fb5b02e0134879011618e5ee2a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726855759600999590,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9adca152c5d4c155cc64086c8ecb73c1c77d7152d7a3749d702f64c98f89e7e,PodSandboxId:fe4d3c58e8ee5c064f06fd55412e29e823053f7b801edac843e6f5836a9baab0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726855759453448071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154
fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f71a8c2fd0f7fa3427392665a6a4fe2a1aebe4cacfcabb32935483b98ad88ba1,PodSandboxId:4d10a800b45c8f4bc465562ec9c102143b39510a15dce3a2f692025a08d8fac6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726855759448735001,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df11049388f77c40514c1c2090f11fb0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6994ccac0eaf33e08af026f534ceacfa702115682033809428acc70f8d685d8e,PodSandboxId:87436c1b35bb902bec09e2aa397f6cec52bd5de1fc0ce27345e231fdb57e223e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726855759320053700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04bd7de12a4c2bf871def7b0b4edffb,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d13f339c817f2c3d48c059df5dbe95c744b4d770d8f156847fa2a2faddfb16,PodSandboxId:d56c4fb5022a404a46d5f7d82850d40a0aa7777c181ec28335c613507545a0bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726855304216934251,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vv8nw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85b32a80-3a6e-48f0-a2ec-3ba8ab13dcd9,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5,PodSandboxId:cfb097797b5199e13abd148addc36710b46ff21a5a455fd00cb451d38da7e05d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855158811909217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6llmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ddaa5e4-3f5d-4f5b-96c3-d4f3d5a94a92,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01,PodSandboxId:503157b6402f3fdb0fbd3acb2400d53236322647d77c96f1052ed35d749180f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726855158741057313,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7862a6e-54cc-450c-b283-d20fb99f51ce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9,PodSandboxId:d420593f085b4cce05c57a111a173b0556ed0c67debb38fe8785523e9aaf085f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726855147923732386,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rdqkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ae4e37-b29b-400a-af2d-544da4024069,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5,PodSandboxId:db50f6f39d94cb0d879c4bc3f389365a1aefdeeb9341d4b7a315ad9363b767e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726855146590192031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z24zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9271d251-2d95-4b23-85f3-7da6567b2fc3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f,PodSandboxId:6700b91af83d5fda728b258af762ef66bdc5850c898a59cb3d1e0f4a4a4f5bf2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726855135139539095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c1c788fcec8342e1c63532c81c0089,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d,PodSandboxId:3b399285f0a3eef630d7233d6655753e34da36bbb3233127467544caa535e7b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726855135143537875,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-347193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2162ed84fa6d6f0dc154fd669d8f73d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc8964fe-acf2-4ccd-a261-2a6392d92e2a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a32639c7e7c0b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   7632b2b5457be       storage-provisioner
	bf83a94c43f01       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   2                   87436c1b35bb9       kube-controller-manager-ha-347193
	31aaef813076b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            3                   4d10a800b45c8       kube-apiserver-ha-347193
	913c502261772       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   7632b2b5457be       storage-provisioner
	1b86283c7ec4b       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   01ad1d593334b       busybox-7dff88458-vv8nw
	dab0f9006f2cb       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   1cb5129054927       kube-vip-ha-347193
	d376e9770a85a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      5 minutes ago       Running             kube-proxy                1                   5e0f11d03a2a1       kube-proxy-rdqkg
	e4e60e2dd4d59       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   b9937d468b42f       coredns-7c65d6cfc9-6llmd
	f6758e679045b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      5 minutes ago       Running             kube-scheduler            1                   0b4488f4dcb22       kube-scheduler-ha-347193
	c46db4ca04a2f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   71332909fe6bc       coredns-7c65d6cfc9-bkmhn
	afb34dd2113df       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   7b094de6a3f1d       kindnet-z24zp
	e9adca152c5d4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   fe4d3c58e8ee5       etcd-ha-347193
	f71a8c2fd0f7f       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      5 minutes ago       Exited              kube-apiserver            2                   4d10a800b45c8       kube-apiserver-ha-347193
	6994ccac0eaf3       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      5 minutes ago       Exited              kube-controller-manager   1                   87436c1b35bb9       kube-controller-manager-ha-347193
	24d13f339c817       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   12 minutes ago      Exited              busybox                   0                   d56c4fb5022a4       busybox-7dff88458-vv8nw
	998d6fb086954       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   cfb097797b519       coredns-7c65d6cfc9-6llmd
	4980eee34ad3b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   503157b6402f3       coredns-7c65d6cfc9-bkmhn
	54d750519756c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      15 minutes ago      Exited              kube-proxy                0                   d420593f085b4       kube-proxy-rdqkg
	ebfa9fcdc2495       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      15 minutes ago      Exited              kindnet-cni               0                   db50f6f39d94c       kindnet-z24zp
	b9e6f76c6e332       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      15 minutes ago      Exited              etcd                      0                   3b399285f0a3e       etcd-ha-347193
	6cae0975e4bde       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      15 minutes ago      Exited              kube-scheduler            0                   6700b91af83d5       kube-scheduler-ha-347193
	
	
	==> coredns [4980eee34ad3ba7da75cc4e9c516761b3da86e3c11f0af0117346b728f906c01] <==
	[INFO] 10.244.1.2:35811 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004936568s
	[INFO] 10.244.1.2:36016 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003046132s
	[INFO] 10.244.1.2:34653 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170016s
	[INFO] 10.244.1.2:59470 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145491s
	[INFO] 10.244.2.2:50581 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001424335s
	[INFO] 10.244.2.2:53657 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087743s
	[INFO] 10.244.0.4:45468 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002017081s
	[INFO] 10.244.0.4:50151 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148946s
	[INFO] 10.244.0.4:51594 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101915s
	[INFO] 10.244.0.4:54414 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114937s
	[INFO] 10.244.1.2:38701 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218522s
	[INFO] 10.244.1.2:41853 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000182128s
	[INFO] 10.244.2.2:48909 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000169464s
	[INFO] 10.244.0.4:55409 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111385s
	[INFO] 10.244.1.2:58822 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000137575s
	[INFO] 10.244.2.2:55178 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124535s
	[INFO] 10.244.2.2:44350 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150664s
	[INFO] 10.244.0.4:57962 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114195s
	[INFO] 10.244.0.4:56551 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094805s
	[INFO] 10.244.0.4:45171 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000054433s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1809&timeout=6m58s&timeoutSeconds=418&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1758&timeout=5m40s&timeoutSeconds=340&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1762&timeout=9m40s&timeoutSeconds=580&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [998d6fb086954d2271938d16ca72cdb79faa1494e63c63b9f8f9e4d11a4ea4e5] <==
	[INFO] 10.244.2.2:38149 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001158s
	[INFO] 10.244.2.2:42221 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000113646s
	[INFO] 10.244.2.2:49599 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173465s
	[INFO] 10.244.0.4:60750 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180138s
	[INFO] 10.244.0.4:46666 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171665s
	[INFO] 10.244.0.4:52002 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001444571s
	[INFO] 10.244.0.4:45151 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00006024s
	[INFO] 10.244.1.2:34989 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195829s
	[INFO] 10.244.1.2:34116 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087145s
	[INFO] 10.244.2.2:41553 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124108s
	[INFO] 10.244.2.2:35637 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116822s
	[INFO] 10.244.2.2:34355 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111835s
	[INFO] 10.244.0.4:48848 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165085s
	[INFO] 10.244.0.4:49930 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082351s
	[INFO] 10.244.0.4:35945 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077731s
	[INFO] 10.244.1.2:37666 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145796s
	[INFO] 10.244.1.2:50941 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000259758s
	[INFO] 10.244.1.2:52591 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141872s
	[INFO] 10.244.2.2:39683 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141964s
	[INFO] 10.244.2.2:51672 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000176831s
	[INFO] 10.244.0.4:58285 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000193464s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	
	
	==> coredns [c46db4ca04a2f60a9e34d715c267fd8cfabc0d403c54d77c65f4c3a60a54e315] <==
	[INFO] plugin/kubernetes: Trace[391238245]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 18:09:25.010) (total time: 10001ms):
	Trace[391238245]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:09:35.011)
	Trace[391238245]: [10.001526491s] [10.001526491s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e4e60e2dd4d595c0dba3467787581b6abb97b4ea3dc50c7556910dea480a591c] <==
	Trace[991853822]: [10.00142678s] [10.00142678s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59252->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59252->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:59240->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1914598391]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 18:09:31.325) (total time: 10379ms):
	Trace[1914598391]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:59240->10.96.0.1:443: read: connection reset by peer 10379ms (18:09:41.704)
	Trace[1914598391]: [10.379690737s] [10.379690737s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:59240->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-347193
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-347193
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=ha-347193
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T17_59_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:59:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-347193
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:14:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:10:04 +0000   Fri, 20 Sep 2024 17:59:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:10:04 +0000   Fri, 20 Sep 2024 17:59:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:10:04 +0000   Fri, 20 Sep 2024 17:59:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:10:04 +0000   Fri, 20 Sep 2024 17:59:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-347193
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 24c3d61093c44fc4b2898b98b4bdbc70
	  System UUID:                24c3d610-93c4-4fc4-b289-8b98b4bdbc70
	  Boot ID:                    5638bfe2-e986-4137-9385-e18b7e4b519b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vv8nw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-6llmd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-bkmhn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-347193                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-z24zp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-347193             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-347193    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-rdqkg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-347193             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-347193                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m23s                  kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-347193 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-347193 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-347193 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     15m                    kubelet          Node ha-347193 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m                    kubelet          Node ha-347193 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  15m                    kubelet          Node ha-347193 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           15m                    node-controller  Node ha-347193 event: Registered Node ha-347193 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-347193 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-347193 event: Registered Node ha-347193 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-347193 event: Registered Node ha-347193 in Controller
	  Warning  ContainerGCFailed        5m25s (x2 over 6m25s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m16s (x3 over 6m5s)   kubelet          Node ha-347193 status is now: NodeNotReady
	  Normal   RegisteredNode           4m25s                  node-controller  Node ha-347193 event: Registered Node ha-347193 in Controller
	  Normal   RegisteredNode           4m19s                  node-controller  Node ha-347193 event: Registered Node ha-347193 in Controller
	  Normal   RegisteredNode           3m18s                  node-controller  Node ha-347193 event: Registered Node ha-347193 in Controller
	
	
	Name:               ha-347193-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-347193-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=ha-347193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T17_59_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:59:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-347193-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:14:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:12:58 +0000   Fri, 20 Sep 2024 18:12:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:12:58 +0000   Fri, 20 Sep 2024 18:12:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:12:58 +0000   Fri, 20 Sep 2024 18:12:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:12:58 +0000   Fri, 20 Sep 2024 18:12:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    ha-347193-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 325a97217aeb4c8f9cb24edad597fd25
	  System UUID:                325a9721-7aeb-4c8f-9cb2-4edad597fd25
	  Boot ID:                    6084d441-850a-4962-98a2-2de79ae637fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-85fk6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-347193-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-cqbxl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-347193-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-347193-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-ffdvq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-347193-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-347193-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 4m2s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-347193-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-347193-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-347193-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           14m                    node-controller  Node ha-347193-m02 event: Registered Node ha-347193-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-347193-m02 event: Registered Node ha-347193-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-347193-m02 event: Registered Node ha-347193-m02 in Controller
	  Normal  NodeNotReady             10m                    node-controller  Node ha-347193-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    4m49s (x8 over 4m49s)  kubelet          Node ha-347193-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m49s (x8 over 4m49s)  kubelet          Node ha-347193-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m49s (x7 over 4m49s)  kubelet          Node ha-347193-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m25s                  node-controller  Node ha-347193-m02 event: Registered Node ha-347193-m02 in Controller
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-347193-m02 event: Registered Node ha-347193-m02 in Controller
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-347193-m02 event: Registered Node ha-347193-m02 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-347193-m02 status is now: NodeNotReady
	
	
	Name:               ha-347193-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-347193-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=ha-347193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_02_19_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:02:18 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-347193-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:11:59 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 18:11:39 +0000   Fri, 20 Sep 2024 18:12:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 18:11:39 +0000   Fri, 20 Sep 2024 18:12:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 18:11:39 +0000   Fri, 20 Sep 2024 18:12:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 18:11:39 +0000   Fri, 20 Sep 2024 18:12:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    ha-347193-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 36beb0176a7e4c449ee02f4adaf970e8
	  System UUID:                36beb017-6a7e-4c44-9ee0-2f4adaf970e8
	  Boot ID:                    692f9cba-783e-41cb-9228-7e49d59be4d7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vg68d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-t5f94              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-gtwzd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   CIDRAssignmentFailed     12m                    cidrAllocator    Node ha-347193-m04 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-347193-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-347193-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-347193-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-347193-m04 event: Registered Node ha-347193-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-347193-m04 event: Registered Node ha-347193-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-347193-m04 event: Registered Node ha-347193-m04 in Controller
	  Normal   NodeReady                11m                    kubelet          Node ha-347193-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m25s                  node-controller  Node ha-347193-m04 event: Registered Node ha-347193-m04 in Controller
	  Normal   RegisteredNode           4m19s                  node-controller  Node ha-347193-m04 event: Registered Node ha-347193-m04 in Controller
	  Normal   RegisteredNode           3m18s                  node-controller  Node ha-347193-m04 event: Registered Node ha-347193-m04 in Controller
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m47s (x3 over 2m47s)  kubelet          Node ha-347193-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x3 over 2m47s)  kubelet          Node ha-347193-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x3 over 2m47s)  kubelet          Node ha-347193-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m47s (x2 over 2m47s)  kubelet          Node ha-347193-m04 has been rebooted, boot id: 692f9cba-783e-41cb-9228-7e49d59be4d7
	  Normal   NodeReady                2m47s (x2 over 2m47s)  kubelet          Node ha-347193-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s (x2 over 3m45s)   node-controller  Node ha-347193-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +9.314105] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.055929] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059483] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.173430] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.132192] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.252987] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +3.876503] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +5.009721] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.059883] kauditd_printk_skb: 158 callbacks suppressed
	[Sep20 17:59] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +0.095619] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.048443] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.211237] kauditd_printk_skb: 38 callbacks suppressed
	[Sep20 18:00] kauditd_printk_skb: 24 callbacks suppressed
	[Sep20 18:09] systemd-fstab-generator[3521]: Ignoring "noauto" option for root device
	[  +0.157859] systemd-fstab-generator[3533]: Ignoring "noauto" option for root device
	[  +0.187478] systemd-fstab-generator[3547]: Ignoring "noauto" option for root device
	[  +0.150277] systemd-fstab-generator[3559]: Ignoring "noauto" option for root device
	[  +0.317882] systemd-fstab-generator[3587]: Ignoring "noauto" option for root device
	[  +0.825728] systemd-fstab-generator[3683]: Ignoring "noauto" option for root device
	[  +5.320469] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.057031] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.052446] kauditd_printk_skb: 1 callbacks suppressed
	[ +16.831692] kauditd_printk_skb: 5 callbacks suppressed
	[Sep20 18:10] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [b9e6f76c6e332b06894feb75a28554e53b9054c96ab97ebcb8ef857cf93bf69d] <==
	2024/09/20 18:07:40 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/20 18:07:40 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-20T18:07:40.582958Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.246:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:07:40.583046Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.246:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T18:07:40.584582Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b19954eb16571c64","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-20T18:07:40.584786Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7525355198545e9d"}
	{"level":"info","ts":"2024-09-20T18:07:40.584819Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7525355198545e9d"}
	{"level":"info","ts":"2024-09-20T18:07:40.584845Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7525355198545e9d"}
	{"level":"info","ts":"2024-09-20T18:07:40.584914Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b19954eb16571c64","remote-peer-id":"7525355198545e9d"}
	{"level":"info","ts":"2024-09-20T18:07:40.584990Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b19954eb16571c64","remote-peer-id":"7525355198545e9d"}
	{"level":"info","ts":"2024-09-20T18:07:40.585055Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b19954eb16571c64","remote-peer-id":"7525355198545e9d"}
	{"level":"info","ts":"2024-09-20T18:07:40.585091Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7525355198545e9d"}
	{"level":"info","ts":"2024-09-20T18:07:40.585117Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:07:40.585150Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:07:40.585196Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:07:40.585353Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b19954eb16571c64","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:07:40.585413Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b19954eb16571c64","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:07:40.585467Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b19954eb16571c64","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:07:40.585500Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:07:40.590619Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.246:2380"}
	{"level":"warn","ts":"2024-09-20T18:07:40.590725Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.845116918s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-20T18:07:40.590919Z","caller":"traceutil/trace.go:171","msg":"trace[514827633] range","detail":"{range_begin:; range_end:; }","duration":"8.845328468s","start":"2024-09-20T18:07:31.745581Z","end":"2024-09-20T18:07:40.590910Z","steps":["trace[514827633] 'agreement among raft nodes before linearized reading'  (duration: 8.845114802s)"],"step_count":1}
	{"level":"error","ts":"2024-09-20T18:07:40.590984Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-20T18:07:40.590889Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.246:2380"}
	{"level":"info","ts":"2024-09-20T18:07:40.591202Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-347193","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.246:2380"],"advertise-client-urls":["https://192.168.39.246:2379"]}
	
	
	==> etcd [e9adca152c5d4c155cc64086c8ecb73c1c77d7152d7a3749d702f64c98f89e7e] <==
	{"level":"info","ts":"2024-09-20T18:11:02.068341Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:11:02.080059Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b19954eb16571c64","to":"c8ee87ebd06db0cf","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-20T18:11:02.080200Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b19954eb16571c64","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:11:02.085211Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b19954eb16571c64","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:11:02.089679Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b19954eb16571c64","to":"c8ee87ebd06db0cf","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-20T18:11:02.089795Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b19954eb16571c64","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:11:02.098053Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b19954eb16571c64","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"warn","ts":"2024-09-20T18:11:52.519438Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.250:48234","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-09-20T18:11:52.531625Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b19954eb16571c64 switched to configuration voters=(8441211701140151965 12797353184818830436)"}
	{"level":"info","ts":"2024-09-20T18:11:52.534104Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"7954d586cad9e091","local-member-id":"b19954eb16571c64","removed-remote-peer-id":"c8ee87ebd06db0cf","removed-remote-peer-urls":["https://192.168.39.250:2380"]}
	{"level":"info","ts":"2024-09-20T18:11:52.534230Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"warn","ts":"2024-09-20T18:11:52.534719Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:11:52.534756Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"warn","ts":"2024-09-20T18:11:52.535064Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:11:52.535091Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:11:52.535548Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b19954eb16571c64","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"warn","ts":"2024-09-20T18:11:52.535804Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b19954eb16571c64","remote-peer-id":"c8ee87ebd06db0cf","error":"context canceled"}
	{"level":"warn","ts":"2024-09-20T18:11:52.535864Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"c8ee87ebd06db0cf","error":"failed to read c8ee87ebd06db0cf on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-20T18:11:52.535904Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b19954eb16571c64","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"warn","ts":"2024-09-20T18:11:52.536059Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b19954eb16571c64","remote-peer-id":"c8ee87ebd06db0cf","error":"context canceled"}
	{"level":"info","ts":"2024-09-20T18:11:52.536137Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b19954eb16571c64","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:11:52.536159Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"info","ts":"2024-09-20T18:11:52.536173Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"b19954eb16571c64","removed-remote-peer-id":"c8ee87ebd06db0cf"}
	{"level":"warn","ts":"2024-09-20T18:11:52.545786Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"b19954eb16571c64","remote-peer-id-stream-handler":"b19954eb16571c64","remote-peer-id-from":"c8ee87ebd06db0cf"}
	{"level":"warn","ts":"2024-09-20T18:11:52.558683Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"b19954eb16571c64","remote-peer-id-stream-handler":"b19954eb16571c64","remote-peer-id-from":"c8ee87ebd06db0cf"}
	
	
	==> kernel <==
	 18:14:26 up 16 min,  0 users,  load average: 0.16, 0.31, 0.24
	Linux ha-347193 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [afb34dd2113dfdecc52e4e844500d8e47d56b75ddd0f12a74b7babec6601a732] <==
	I0920 18:13:40.815067       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	I0920 18:13:50.821977       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:13:50.822131       1 main.go:299] handling current node
	I0920 18:13:50.822165       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:13:50.822186       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:13:50.822464       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:13:50.822530       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	I0920 18:14:00.813342       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:14:00.813391       1 main.go:299] handling current node
	I0920 18:14:00.813410       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:14:00.813416       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:14:00.813623       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:14:00.813634       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	I0920 18:14:10.818150       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:14:10.818202       1 main.go:299] handling current node
	I0920 18:14:10.818223       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:14:10.818229       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:14:10.818434       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:14:10.818454       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	I0920 18:14:20.813454       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:14:20.813517       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:14:20.813784       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:14:20.813808       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	I0920 18:14:20.813860       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:14:20.813878       1 main.go:299] handling current node
	
	
	==> kindnet [ebfa9fcdc249565c06dbd406b92ef0914d48ad740d3028164e30fd402a46e7b5] <==
	I0920 18:07:17.652140       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:07:17.652233       1 main.go:299] handling current node
	I0920 18:07:17.652255       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:07:17.652260       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:07:17.652547       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0920 18:07:17.652607       1 main.go:322] Node ha-347193-m03 has CIDR [10.244.2.0/24] 
	I0920 18:07:17.652723       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:07:17.652742       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	I0920 18:07:27.660457       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:07:27.660547       1 main.go:299] handling current node
	I0920 18:07:27.660571       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:07:27.660578       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:07:27.660875       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0920 18:07:27.660936       1 main.go:322] Node ha-347193-m03 has CIDR [10.244.2.0/24] 
	I0920 18:07:27.661042       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:07:27.661061       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	E0920 18:07:28.584892       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1809&timeout=6m47s&timeoutSeconds=407&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0920 18:07:37.656105       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:07:37.656144       1 main.go:299] handling current node
	I0920 18:07:37.656159       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0920 18:07:37.656166       1 main.go:322] Node ha-347193-m02 has CIDR [10.244.1.0/24] 
	I0920 18:07:37.656438       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I0920 18:07:37.656497       1 main.go:322] Node ha-347193-m03 has CIDR [10.244.2.0/24] 
	I0920 18:07:37.656601       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0920 18:07:37.656730       1 main.go:322] Node ha-347193-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [31aaef813076b4a046a50b286bf9e3353a117fb39496ae588ea8966417cc50dc] <==
	I0920 18:10:03.800904       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0920 18:10:03.886489       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 18:10:03.890675       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 18:10:03.891065       1 policy_source.go:224] refreshing policies
	I0920 18:10:03.893209       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 18:10:03.895245       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 18:10:03.895400       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 18:10:03.897113       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 18:10:03.897202       1 aggregator.go:171] initial CRD sync complete...
	I0920 18:10:03.897250       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 18:10:03.897335       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 18:10:03.897361       1 cache.go:39] Caches are synced for autoregister controller
	I0920 18:10:03.899922       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 18:10:03.900664       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 18:10:03.900689       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 18:10:03.900828       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 18:10:03.901878       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 18:10:03.917732       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0920 18:10:03.920770       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.241 192.168.39.250]
	I0920 18:10:03.931757       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 18:10:03.957803       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0920 18:10:03.970064       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0920 18:10:04.804674       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0920 18:10:05.484327       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.241 192.168.39.246 192.168.39.250]
	W0920 18:12:05.493236       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.241 192.168.39.246]
	
	
	==> kube-apiserver [f71a8c2fd0f7fa3427392665a6a4fe2a1aebe4cacfcabb32935483b98ad88ba1] <==
	I0920 18:09:20.219405       1 options.go:228] external host was not specified, using 192.168.39.246
	I0920 18:09:20.223885       1 server.go:142] Version: v1.31.1
	I0920 18:09:20.223954       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:09:20.633059       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0920 18:09:20.673542       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0920 18:09:20.673628       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0920 18:09:20.673974       1 instance.go:232] Using reconciler: lease
	I0920 18:09:20.674609       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0920 18:09:40.629951       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0920 18:09:40.629957       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0920 18:09:40.675266       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [6994ccac0eaf33e08af026f534ceacfa702115682033809428acc70f8d685d8e] <==
	I0920 18:09:20.849588       1 serving.go:386] Generated self-signed cert in-memory
	I0920 18:09:21.063655       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0920 18:09:21.063751       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:09:21.074167       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0920 18:09:21.074413       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 18:09:21.074436       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0920 18:09:21.074460       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0920 18:09:41.685646       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.246:8443/healthz\": dial tcp 192.168.39.246:8443: connect: connection refused"
	
	
	==> kube-controller-manager [bf83a94c43f015b6bd1ac29c32b1c20d057598c03b801cd5b210ddc46cd83d7f] <==
	E0920 18:12:47.231467       1 gc_controller.go:151] "Failed to get node" err="node \"ha-347193-m03\" not found" logger="pod-garbage-collector-controller" node="ha-347193-m03"
	E0920 18:12:47.231500       1 gc_controller.go:151] "Failed to get node" err="node \"ha-347193-m03\" not found" logger="pod-garbage-collector-controller" node="ha-347193-m03"
	E0920 18:12:47.231525       1 gc_controller.go:151] "Failed to get node" err="node \"ha-347193-m03\" not found" logger="pod-garbage-collector-controller" node="ha-347193-m03"
	E0920 18:12:47.231548       1 gc_controller.go:151] "Failed to get node" err="node \"ha-347193-m03\" not found" logger="pod-garbage-collector-controller" node="ha-347193-m03"
	I0920 18:12:47.246341       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-347193-m03"
	I0920 18:12:47.280246       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-347193-m03"
	I0920 18:12:47.280393       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-pccxp"
	I0920 18:12:47.318954       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-pccxp"
	I0920 18:12:47.319061       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-347193-m03"
	I0920 18:12:47.356477       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-347193-m03"
	I0920 18:12:47.356626       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-347193-m03"
	I0920 18:12:47.383511       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-347193-m03"
	I0920 18:12:47.383630       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-347193-m03"
	I0920 18:12:47.418760       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-347193-m03"
	I0920 18:12:47.418850       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5msnk"
	I0920 18:12:47.451605       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5msnk"
	I0920 18:12:47.451696       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-347193-m03"
	I0920 18:12:47.482463       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-347193-m03"
	I0920 18:12:52.364481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m02"
	I0920 18:12:57.183708       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m04"
	I0920 18:12:57.293632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.14393ms"
	I0920 18:12:57.293718       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.26µs"
	I0920 18:12:58.427971       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m02"
	I0920 18:12:58.446220       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m02"
	I0920 18:13:02.030689       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-347193-m02"
	
	
	==> kube-proxy [54d750519756c8ce28de93ae0c1956134f3fadbcadbe8e1714cd9725087490e9] <==
	E0920 18:06:29.899962       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:06:32.970670       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1694": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:06:32.970879       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1694\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:06:32.971051       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-347193&resourceVersion=1809": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:06:32.971089       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-347193&resourceVersion=1809\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:06:36.041655       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:06:36.041891       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:06:39.114686       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1694": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:06:39.114934       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1694\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:06:39.115160       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-347193&resourceVersion=1809": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:06:39.115461       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-347193&resourceVersion=1809\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:06:42.185608       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:06:42.185761       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:06:48.329358       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1694": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:06:48.329510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1694\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:06:51.402655       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:06:51.402870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:06:51.403170       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-347193&resourceVersion=1809": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:06:51.403258       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-347193&resourceVersion=1809\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:07:09.834636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1694": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:07:09.835903       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1694\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:07:12.906022       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:07:12.906761       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:07:15.978251       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-347193&resourceVersion=1809": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:07:15.978561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-347193&resourceVersion=1809\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [d376e9770a85a40d5a2f2a68e191b487630df8377fcfcb26b99623bdf354431a] <==
	E0920 18:10:01.870389       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-347193\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0920 18:10:01.873367       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0920 18:10:01.873524       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:10:02.405899       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:10:02.407378       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:10:02.407473       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:10:02.447558       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:10:02.448173       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:10:02.448237       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:10:02.461661       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:10:02.461738       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:10:02.461783       1 config.go:199] "Starting service config controller"
	I0920 18:10:02.461802       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:10:02.462430       1 config.go:328] "Starting node config controller"
	I0920 18:10:02.462490       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0920 18:10:04.937162       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-347193&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:10:04.938545       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-347193&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:10:04.938720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:10:04.938785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:10:04.938894       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:10:04.938930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0920 18:10:04.939599       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0920 18:10:05.863404       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:10:06.062843       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:10:06.562881       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6cae0975e4bde26585a0202a59834643e11d51b734c7c4b2b5fd66a7b132cc9f] <==
	W0920 17:58:59.537813       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:58:59.537936       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.543453       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 17:58:59.543567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:58:59.650341       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 17:58:59.650386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 17:59:01.377349       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0920 18:02:18.846875       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-t5f94\": pod kindnet-t5f94 is already assigned to node \"ha-347193-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-t5f94" node="ha-347193-m04"
	E0920 18:02:18.847041       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 33dab94e-9da4-4a58-83f6-a7a351c8c216(kube-system/kindnet-t5f94) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-t5f94"
	E0920 18:02:18.847081       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-t5f94\": pod kindnet-t5f94 is already assigned to node \"ha-347193-m04\"" pod="kube-system/kindnet-t5f94"
	I0920 18:02:18.847108       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-t5f94" node="ha-347193-m04"
	E0920 18:07:30.959838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0920 18:07:31.931401       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0920 18:07:32.501085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0920 18:07:34.509511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0920 18:07:35.687201       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0920 18:07:36.060126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0920 18:07:36.093018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0920 18:07:36.645207       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0920 18:07:36.812022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0920 18:07:36.910109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0920 18:07:37.051528       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0920 18:07:37.894179       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0920 18:07:38.884995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0920 18:07:40.514451       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f6758e679045b017c3f1bb18c6052183ee469e16e30b035c3a54985570a4731f] <==
	W0920 18:09:58.498224       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.246:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0920 18:09:58.498331       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.246:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.246:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:09:58.603779       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.246:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0920 18:09:58.603928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.246:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.246:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:09:58.834196       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.246:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0920 18:09:58.834437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.246:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.246:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:09:59.459238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.246:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0920 18:09:59.459356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.246:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.246:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:09:59.556489       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.246:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0920 18:09:59.556548       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.246:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.246:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:10:00.059880       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.246:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0920 18:10:00.060057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.246:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.246:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:10:00.439995       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.246:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0920 18:10:00.440071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.246:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.246:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:10:01.422414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.246:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0920 18:10:01.422543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.246:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.246:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:10:01.529973       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.246:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0920 18:10:01.530020       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.246:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.246:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:10:01.820086       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.246:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0920 18:10:01.820163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.246:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.246:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:10:03.826153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 18:10:03.826333       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:10:03.826521       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 18:10:03.826593       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0920 18:10:14.001364       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:13:01 ha-347193 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:13:01 ha-347193 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:13:01 ha-347193 kubelet[1310]: E0920 18:13:01.868120    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855981864700189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:13:01 ha-347193 kubelet[1310]: E0920 18:13:01.868166    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855981864700189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:13:11 ha-347193 kubelet[1310]: E0920 18:13:11.872565    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855991871964090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:13:11 ha-347193 kubelet[1310]: E0920 18:13:11.872910    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726855991871964090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:13:21 ha-347193 kubelet[1310]: E0920 18:13:21.875007    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856001873971443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:13:21 ha-347193 kubelet[1310]: E0920 18:13:21.875481    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856001873971443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:13:31 ha-347193 kubelet[1310]: E0920 18:13:31.877337    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856011876219182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:13:31 ha-347193 kubelet[1310]: E0920 18:13:31.877626    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856011876219182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:13:41 ha-347193 kubelet[1310]: E0920 18:13:41.879078    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856021878562742,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:13:41 ha-347193 kubelet[1310]: E0920 18:13:41.879127    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856021878562742,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:13:51 ha-347193 kubelet[1310]: E0920 18:13:51.881430    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856031880564914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:13:51 ha-347193 kubelet[1310]: E0920 18:13:51.881480    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856031880564914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:14:01 ha-347193 kubelet[1310]: E0920 18:14:01.623881    1310 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 18:14:01 ha-347193 kubelet[1310]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:14:01 ha-347193 kubelet[1310]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:14:01 ha-347193 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:14:01 ha-347193 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:14:01 ha-347193 kubelet[1310]: E0920 18:14:01.882760    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856041882388801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:14:01 ha-347193 kubelet[1310]: E0920 18:14:01.882793    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856041882388801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:14:11 ha-347193 kubelet[1310]: E0920 18:14:11.884265    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856051883980215,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:14:11 ha-347193 kubelet[1310]: E0920 18:14:11.884348    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856051883980215,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:14:21 ha-347193 kubelet[1310]: E0920 18:14:21.895326    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856061894829352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:14:21 ha-347193 kubelet[1310]: E0920 18:14:21.895752    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856061894829352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:14:25.607061  264569 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19679-237658/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-347193 -n ha-347193
helpers_test.go:261: (dbg) Run:  kubectl --context ha-347193 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.74s)
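Note on the "failed to output last start logs ... bufio.Scanner: token too long" error in the stderr block above: it comes from Go's bufio.Scanner, which refuses to scan any single line longer than its buffer (bufio.MaxScanTokenSize, 64 KiB, by default), so a lastStart.txt containing very long log lines trips that limit. Below is a minimal, illustrative sketch of reading such a file with an enlarged scanner buffer; the lastLines helper and the 1 MiB cap are assumptions for illustration only, not minikube's actual logs.go implementation.
	// Illustrative sketch only (not minikube's logs.go): collect the last N lines
	// of a file whose individual lines may exceed bufio.Scanner's 64 KiB default.
	package main
	
	import (
		"bufio"
		"fmt"
		"os"
	)
	
	func lastLines(path string, keep int) ([]string, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()
	
		sc := bufio.NewScanner(f)
		// Raise the per-line cap to 1 MiB; with the default bufio.MaxScanTokenSize
		// (64 KiB), a longer line makes sc.Err() return bufio.ErrTooLong,
		// i.e. "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	
		var lines []string
		for sc.Scan() {
			lines = append(lines, sc.Text())
			if len(lines) > keep {
				lines = lines[1:] // keep only the trailing window
			}
		}
		return lines, sc.Err()
	}
	
	func main() {
		lines, err := lastLines("lastStart.txt", 25)
		if err != nil {
			fmt.Fprintln(os.Stderr, "failed to read file:", err)
			os.Exit(1)
		}
		for _, l := range lines {
			fmt.Println(l)
		}
	}
An alternative that avoids the fixed token limit entirely is to read with bufio.Reader.ReadString('\n'), which grows its buffer as needed.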

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (326.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-029872
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-029872
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-029872: exit status 82 (2m1.851178896s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-029872-m03"  ...
	* Stopping node "multinode-029872-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-029872" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-029872 --wait=true -v=8 --alsologtostderr
E0920 18:31:34.008836  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:32:29.487325  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:33:30.942595  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-029872 --wait=true -v=8 --alsologtostderr: (3m22.251889722s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-029872
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-029872 -n multinode-029872
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-029872 logs -n 25: (1.492362392s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-029872 ssh -n                                                                 | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-029872 cp multinode-029872-m02:/home/docker/cp-test.txt                       | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2317310210/001/cp-test_multinode-029872-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n                                                                 | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-029872 cp multinode-029872-m02:/home/docker/cp-test.txt                       | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872:/home/docker/cp-test_multinode-029872-m02_multinode-029872.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n                                                                 | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n multinode-029872 sudo cat                                       | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | /home/docker/cp-test_multinode-029872-m02_multinode-029872.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-029872 cp multinode-029872-m02:/home/docker/cp-test.txt                       | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m03:/home/docker/cp-test_multinode-029872-m02_multinode-029872-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n                                                                 | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n multinode-029872-m03 sudo cat                                   | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | /home/docker/cp-test_multinode-029872-m02_multinode-029872-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-029872 cp testdata/cp-test.txt                                                | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n                                                                 | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-029872 cp multinode-029872-m03:/home/docker/cp-test.txt                       | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2317310210/001/cp-test_multinode-029872-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n                                                                 | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-029872 cp multinode-029872-m03:/home/docker/cp-test.txt                       | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872:/home/docker/cp-test_multinode-029872-m03_multinode-029872.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n                                                                 | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n multinode-029872 sudo cat                                       | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | /home/docker/cp-test_multinode-029872-m03_multinode-029872.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-029872 cp multinode-029872-m03:/home/docker/cp-test.txt                       | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m02:/home/docker/cp-test_multinode-029872-m03_multinode-029872-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n                                                                 | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n multinode-029872-m02 sudo cat                                   | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | /home/docker/cp-test_multinode-029872-m03_multinode-029872-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-029872 node stop m03                                                          | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	| node    | multinode-029872 node start                                                             | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:29 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-029872                                                                | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:29 UTC |                     |
	| stop    | -p multinode-029872                                                                     | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:29 UTC |                     |
	| start   | -p multinode-029872                                                                     | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:31 UTC | 20 Sep 24 18:34 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-029872                                                                | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:34 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:31:14
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:31:14.512300  274391 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:31:14.512566  274391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:31:14.512574  274391 out.go:358] Setting ErrFile to fd 2...
	I0920 18:31:14.512578  274391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:31:14.512761  274391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 18:31:14.513408  274391 out.go:352] Setting JSON to false
	I0920 18:31:14.514534  274391 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8017,"bootTime":1726849057,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:31:14.514671  274391 start.go:139] virtualization: kvm guest
	I0920 18:31:14.518049  274391 out.go:177] * [multinode-029872] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:31:14.519555  274391 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 18:31:14.519557  274391 notify.go:220] Checking for updates...
	I0920 18:31:14.522094  274391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:31:14.523377  274391 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 18:31:14.524630  274391 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:31:14.526064  274391 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:31:14.527629  274391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:31:14.529363  274391 config.go:182] Loaded profile config "multinode-029872": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:31:14.529456  274391 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:31:14.529987  274391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:31:14.530030  274391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:31:14.547591  274391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43771
	I0920 18:31:14.548116  274391 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:31:14.548758  274391 main.go:141] libmachine: Using API Version  1
	I0920 18:31:14.548783  274391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:31:14.549181  274391 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:31:14.549377  274391 main.go:141] libmachine: (multinode-029872) Calling .DriverName
	I0920 18:31:14.587200  274391 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:31:14.588718  274391 start.go:297] selected driver: kvm2
	I0920 18:31:14.588747  274391 start.go:901] validating driver "kvm2" against &{Name:multinode-029872 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-029872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:31:14.588962  274391 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:31:14.589330  274391 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:31:14.589432  274391 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:31:14.606846  274391 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:31:14.607503  274391 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:31:14.607536  274391 cni.go:84] Creating CNI manager for ""
	I0920 18:31:14.607579  274391 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0920 18:31:14.607632  274391 start.go:340] cluster config:
	{Name:multinode-029872 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-029872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:31:14.607785  274391 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:31:14.609837  274391 out.go:177] * Starting "multinode-029872" primary control-plane node in "multinode-029872" cluster
	I0920 18:31:14.611207  274391 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:31:14.611255  274391 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:31:14.611263  274391 cache.go:56] Caching tarball of preloaded images
	I0920 18:31:14.611347  274391 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:31:14.611357  274391 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:31:14.611469  274391 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/config.json ...
	I0920 18:31:14.611679  274391 start.go:360] acquireMachinesLock for multinode-029872: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:31:14.611724  274391 start.go:364] duration metric: took 26.527µs to acquireMachinesLock for "multinode-029872"
	I0920 18:31:14.611738  274391 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:31:14.611746  274391 fix.go:54] fixHost starting: 
	I0920 18:31:14.612032  274391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:31:14.612066  274391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:31:14.627430  274391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43243
	I0920 18:31:14.627962  274391 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:31:14.628497  274391 main.go:141] libmachine: Using API Version  1
	I0920 18:31:14.628523  274391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:31:14.628922  274391 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:31:14.629118  274391 main.go:141] libmachine: (multinode-029872) Calling .DriverName
	I0920 18:31:14.629307  274391 main.go:141] libmachine: (multinode-029872) Calling .GetState
	I0920 18:31:14.631088  274391 fix.go:112] recreateIfNeeded on multinode-029872: state=Running err=<nil>
	W0920 18:31:14.631106  274391 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:31:14.633278  274391 out.go:177] * Updating the running kvm2 "multinode-029872" VM ...
	I0920 18:31:14.634682  274391 machine.go:93] provisionDockerMachine start ...
	I0920 18:31:14.634709  274391 main.go:141] libmachine: (multinode-029872) Calling .DriverName
	I0920 18:31:14.634938  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHHostname
	I0920 18:31:14.637601  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:14.638113  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:31:14.638147  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:14.638271  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHPort
	I0920 18:31:14.638458  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:31:14.638626  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:31:14.638761  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHUsername
	I0920 18:31:14.638931  274391 main.go:141] libmachine: Using SSH client type: native
	I0920 18:31:14.639100  274391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0920 18:31:14.639110  274391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:31:14.756608  274391 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-029872
	
	I0920 18:31:14.756651  274391 main.go:141] libmachine: (multinode-029872) Calling .GetMachineName
	I0920 18:31:14.757048  274391 buildroot.go:166] provisioning hostname "multinode-029872"
	I0920 18:31:14.757082  274391 main.go:141] libmachine: (multinode-029872) Calling .GetMachineName
	I0920 18:31:14.757277  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHHostname
	I0920 18:31:14.760298  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:14.760700  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:31:14.760741  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:14.760868  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHPort
	I0920 18:31:14.761040  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:31:14.761176  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:31:14.761317  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHUsername
	I0920 18:31:14.761498  274391 main.go:141] libmachine: Using SSH client type: native
	I0920 18:31:14.761700  274391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0920 18:31:14.761716  274391 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-029872 && echo "multinode-029872" | sudo tee /etc/hostname
	I0920 18:31:14.896853  274391 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-029872
	
	I0920 18:31:14.896896  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHHostname
	I0920 18:31:14.900252  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:14.900662  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:31:14.900692  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:14.900925  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHPort
	I0920 18:31:14.901143  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:31:14.901329  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:31:14.901429  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHUsername
	I0920 18:31:14.901557  274391 main.go:141] libmachine: Using SSH client type: native
	I0920 18:31:14.901725  274391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0920 18:31:14.901741  274391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-029872' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-029872/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-029872' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:31:15.007312  274391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:31:15.007351  274391 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 18:31:15.007377  274391 buildroot.go:174] setting up certificates
	I0920 18:31:15.007390  274391 provision.go:84] configureAuth start
	I0920 18:31:15.007404  274391 main.go:141] libmachine: (multinode-029872) Calling .GetMachineName
	I0920 18:31:15.007737  274391 main.go:141] libmachine: (multinode-029872) Calling .GetIP
	I0920 18:31:15.011091  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:15.011461  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:31:15.011492  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:15.011593  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHHostname
	I0920 18:31:15.014035  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:15.014499  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:31:15.014533  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:15.014681  274391 provision.go:143] copyHostCerts
	I0920 18:31:15.014724  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 18:31:15.014770  274391 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 18:31:15.014791  274391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 18:31:15.014973  274391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 18:31:15.015113  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 18:31:15.015144  274391 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 18:31:15.015152  274391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 18:31:15.015202  274391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 18:31:15.015271  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 18:31:15.015300  274391 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 18:31:15.015307  274391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 18:31:15.015352  274391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 18:31:15.015431  274391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.multinode-029872 san=[127.0.0.1 192.168.39.208 localhost minikube multinode-029872]
	I0920 18:31:15.236001  274391 provision.go:177] copyRemoteCerts
	I0920 18:31:15.236080  274391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:31:15.236106  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHHostname
	I0920 18:31:15.239425  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:15.239997  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:31:15.240036  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:15.240225  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHPort
	I0920 18:31:15.240465  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:31:15.240688  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHUsername
	I0920 18:31:15.240872  274391 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/multinode-029872/id_rsa Username:docker}
	I0920 18:31:15.319901  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 18:31:15.319990  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:31:15.345093  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 18:31:15.345168  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0920 18:31:15.368625  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 18:31:15.368694  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:31:15.393178  274391 provision.go:87] duration metric: took 385.77142ms to configureAuth
	I0920 18:31:15.393216  274391 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:31:15.393454  274391 config.go:182] Loaded profile config "multinode-029872": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:31:15.393565  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHHostname
	I0920 18:31:15.396696  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:15.397264  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:31:15.397293  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:15.397503  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHPort
	I0920 18:31:15.397726  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:31:15.397951  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:31:15.398167  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHUsername
	I0920 18:31:15.398359  274391 main.go:141] libmachine: Using SSH client type: native
	I0920 18:31:15.398541  274391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0920 18:31:15.398563  274391 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:32:46.158654  274391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:32:46.158694  274391 machine.go:96] duration metric: took 1m31.52399243s to provisionDockerMachine
	I0920 18:32:46.158712  274391 start.go:293] postStartSetup for "multinode-029872" (driver="kvm2")
	I0920 18:32:46.158726  274391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:32:46.158751  274391 main.go:141] libmachine: (multinode-029872) Calling .DriverName
	I0920 18:32:46.159077  274391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:32:46.159109  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHHostname
	I0920 18:32:46.162214  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:46.162780  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:32:46.162811  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:46.163002  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHPort
	I0920 18:32:46.163174  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:32:46.163316  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHUsername
	I0920 18:32:46.163456  274391 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/multinode-029872/id_rsa Username:docker}
	I0920 18:32:46.246439  274391 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:32:46.250595  274391 command_runner.go:130] > NAME=Buildroot
	I0920 18:32:46.250621  274391 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0920 18:32:46.250625  274391 command_runner.go:130] > ID=buildroot
	I0920 18:32:46.250630  274391 command_runner.go:130] > VERSION_ID=2023.02.9
	I0920 18:32:46.250635  274391 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0920 18:32:46.250661  274391 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:32:46.250675  274391 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 18:32:46.250747  274391 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 18:32:46.250822  274391 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 18:32:46.250838  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /etc/ssl/certs/2448492.pem
	I0920 18:32:46.250928  274391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:32:46.260895  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:32:46.287124  274391 start.go:296] duration metric: took 128.392092ms for postStartSetup
	I0920 18:32:46.287183  274391 fix.go:56] duration metric: took 1m31.675436157s for fixHost
	I0920 18:32:46.287211  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHHostname
	I0920 18:32:46.290280  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:46.290722  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:32:46.290755  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:46.290923  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHPort
	I0920 18:32:46.291131  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:32:46.291349  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:32:46.291563  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHUsername
	I0920 18:32:46.291758  274391 main.go:141] libmachine: Using SSH client type: native
	I0920 18:32:46.291939  274391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0920 18:32:46.291949  274391 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:32:46.395028  274391 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726857166.374393175
	
	I0920 18:32:46.395056  274391 fix.go:216] guest clock: 1726857166.374393175
	I0920 18:32:46.395064  274391 fix.go:229] Guest: 2024-09-20 18:32:46.374393175 +0000 UTC Remote: 2024-09-20 18:32:46.287188601 +0000 UTC m=+91.813631050 (delta=87.204574ms)
	I0920 18:32:46.395086  274391 fix.go:200] guest clock delta is within tolerance: 87.204574ms
	I0920 18:32:46.395090  274391 start.go:83] releasing machines lock for "multinode-029872", held for 1m31.78335832s
	I0920 18:32:46.395109  274391 main.go:141] libmachine: (multinode-029872) Calling .DriverName
	I0920 18:32:46.395446  274391 main.go:141] libmachine: (multinode-029872) Calling .GetIP
	I0920 18:32:46.398516  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:46.398958  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:32:46.398987  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:46.399183  274391 main.go:141] libmachine: (multinode-029872) Calling .DriverName
	I0920 18:32:46.399825  274391 main.go:141] libmachine: (multinode-029872) Calling .DriverName
	I0920 18:32:46.400072  274391 main.go:141] libmachine: (multinode-029872) Calling .DriverName
	I0920 18:32:46.400156  274391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:32:46.400221  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHHostname
	I0920 18:32:46.400341  274391 ssh_runner.go:195] Run: cat /version.json
	I0920 18:32:46.400369  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHHostname
	I0920 18:32:46.403255  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:46.403434  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:46.403804  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:32:46.403837  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:46.403942  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHPort
	I0920 18:32:46.404143  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:32:46.404180  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:46.404154  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:32:46.404366  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHPort
	I0920 18:32:46.404370  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHUsername
	I0920 18:32:46.404593  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:32:46.404609  274391 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/multinode-029872/id_rsa Username:docker}
	I0920 18:32:46.404781  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHUsername
	I0920 18:32:46.404904  274391 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/multinode-029872/id_rsa Username:docker}
	I0920 18:32:46.478609  274391 command_runner.go:130] > {"iso_version": "v1.34.0-1726481713-19649", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "fcd4ba3dbb1ef408e3a4b79c864df2496ddd3848"}
	I0920 18:32:46.478766  274391 ssh_runner.go:195] Run: systemctl --version
	I0920 18:32:46.520173  274391 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0920 18:32:46.520913  274391 command_runner.go:130] > systemd 252 (252)
	I0920 18:32:46.520946  274391 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0920 18:32:46.521002  274391 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:32:46.681155  274391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 18:32:46.688485  274391 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0920 18:32:46.688921  274391 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:32:46.688995  274391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:32:46.698081  274391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 18:32:46.698109  274391 start.go:495] detecting cgroup driver to use...
	I0920 18:32:46.698170  274391 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:32:46.715642  274391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:32:46.729899  274391 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:32:46.729985  274391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:32:46.744093  274391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:32:46.757403  274391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:32:46.906815  274391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:32:47.063516  274391 docker.go:233] disabling docker service ...
	I0920 18:32:47.063603  274391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:32:47.083329  274391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:32:47.096999  274391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:32:47.253856  274391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
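(Taken together, the cri-docker/docker steps above are plain systemd bookkeeping so that only CRI-O answers on the CRI socket. A condensed shell equivalent of what ssh_runner executes here, as a sketch rather than the literal sequence, is:

	sudo systemctl stop -f containerd
	sudo systemctl stop -f cri-docker.socket cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service
)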
	I0920 18:32:47.414344  274391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:32:47.427978  274391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:32:47.447426  274391 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0920 18:32:47.447473  274391 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:32:47.447519  274391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:32:47.457457  274391 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:32:47.457526  274391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:32:47.467676  274391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:32:47.477545  274391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:32:47.487270  274391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:32:47.497375  274391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:32:47.507026  274391 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:32:47.517243  274391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:32:47.527032  274391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:32:47.535601  274391 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0920 18:32:47.535919  274391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:32:47.545102  274391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:32:47.680908  274391 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:32:52.425651  274391 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.744685776s)
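(The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the restart; once crio is back up, the drop-in should carry roughly the following values. This is a sketch inferred from the commands in this log, not a dump of the real file:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
)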
	I0920 18:32:52.425687  274391 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:32:52.425746  274391 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:32:52.430418  274391 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0920 18:32:52.430444  274391 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0920 18:32:52.430451  274391 command_runner.go:130] > Device: 0,22	Inode: 1324        Links: 1
	I0920 18:32:52.430458  274391 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0920 18:32:52.430463  274391 command_runner.go:130] > Access: 2024-09-20 18:32:52.288300714 +0000
	I0920 18:32:52.430469  274391 command_runner.go:130] > Modify: 2024-09-20 18:32:52.288300714 +0000
	I0920 18:32:52.430476  274391 command_runner.go:130] > Change: 2024-09-20 18:32:52.288300714 +0000
	I0920 18:32:52.430482  274391 command_runner.go:130] >  Birth: -
	I0920 18:32:52.430504  274391 start.go:563] Will wait 60s for crictl version
	I0920 18:32:52.430557  274391 ssh_runner.go:195] Run: which crictl
	I0920 18:32:52.434188  274391 command_runner.go:130] > /usr/bin/crictl
	I0920 18:32:52.434250  274391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:32:52.473166  274391 command_runner.go:130] > Version:  0.1.0
	I0920 18:32:52.473197  274391 command_runner.go:130] > RuntimeName:  cri-o
	I0920 18:32:52.473204  274391 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0920 18:32:52.473212  274391 command_runner.go:130] > RuntimeApiVersion:  v1
	I0920 18:32:52.473289  274391 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:32:52.473362  274391 ssh_runner.go:195] Run: crio --version
	I0920 18:32:52.498836  274391 command_runner.go:130] > crio version 1.29.1
	I0920 18:32:52.498867  274391 command_runner.go:130] > Version:        1.29.1
	I0920 18:32:52.498873  274391 command_runner.go:130] > GitCommit:      unknown
	I0920 18:32:52.498879  274391 command_runner.go:130] > GitCommitDate:  unknown
	I0920 18:32:52.498885  274391 command_runner.go:130] > GitTreeState:   clean
	I0920 18:32:52.498892  274391 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0920 18:32:52.498897  274391 command_runner.go:130] > GoVersion:      go1.21.6
	I0920 18:32:52.498900  274391 command_runner.go:130] > Compiler:       gc
	I0920 18:32:52.498905  274391 command_runner.go:130] > Platform:       linux/amd64
	I0920 18:32:52.498916  274391 command_runner.go:130] > Linkmode:       dynamic
	I0920 18:32:52.498921  274391 command_runner.go:130] > BuildTags:      
	I0920 18:32:52.498925  274391 command_runner.go:130] >   containers_image_ostree_stub
	I0920 18:32:52.498936  274391 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0920 18:32:52.498943  274391 command_runner.go:130] >   btrfs_noversion
	I0920 18:32:52.498950  274391 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0920 18:32:52.498957  274391 command_runner.go:130] >   libdm_no_deferred_remove
	I0920 18:32:52.498965  274391 command_runner.go:130] >   seccomp
	I0920 18:32:52.498973  274391 command_runner.go:130] > LDFlags:          unknown
	I0920 18:32:52.498982  274391 command_runner.go:130] > SeccompEnabled:   true
	I0920 18:32:52.498989  274391 command_runner.go:130] > AppArmorEnabled:  false
	I0920 18:32:52.500295  274391 ssh_runner.go:195] Run: crio --version
	I0920 18:32:52.528483  274391 command_runner.go:130] > crio version 1.29.1
	I0920 18:32:52.528517  274391 command_runner.go:130] > Version:        1.29.1
	I0920 18:32:52.528525  274391 command_runner.go:130] > GitCommit:      unknown
	I0920 18:32:52.528532  274391 command_runner.go:130] > GitCommitDate:  unknown
	I0920 18:32:52.528537  274391 command_runner.go:130] > GitTreeState:   clean
	I0920 18:32:52.528546  274391 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0920 18:32:52.528552  274391 command_runner.go:130] > GoVersion:      go1.21.6
	I0920 18:32:52.528556  274391 command_runner.go:130] > Compiler:       gc
	I0920 18:32:52.528561  274391 command_runner.go:130] > Platform:       linux/amd64
	I0920 18:32:52.528570  274391 command_runner.go:130] > Linkmode:       dynamic
	I0920 18:32:52.528577  274391 command_runner.go:130] > BuildTags:      
	I0920 18:32:52.528583  274391 command_runner.go:130] >   containers_image_ostree_stub
	I0920 18:32:52.528589  274391 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0920 18:32:52.528595  274391 command_runner.go:130] >   btrfs_noversion
	I0920 18:32:52.528601  274391 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0920 18:32:52.528609  274391 command_runner.go:130] >   libdm_no_deferred_remove
	I0920 18:32:52.528615  274391 command_runner.go:130] >   seccomp
	I0920 18:32:52.528626  274391 command_runner.go:130] > LDFlags:          unknown
	I0920 18:32:52.528632  274391 command_runner.go:130] > SeccompEnabled:   true
	I0920 18:32:52.528637  274391 command_runner.go:130] > AppArmorEnabled:  false
	I0920 18:32:52.530748  274391 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:32:52.532212  274391 main.go:141] libmachine: (multinode-029872) Calling .GetIP
	I0920 18:32:52.535002  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:52.535487  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:32:52.535520  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:52.535764  274391 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:32:52.539998  274391 command_runner.go:130] > 192.168.39.1	host.minikube.internal
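(The grep above only confirms that the host.minikube.internal entry is already present; had it been missing, the fix is a one-line append to /etc/hosts. Hypothetical illustration, not taken from this run:

	grep -q 'host.minikube.internal' /etc/hosts || \
	  echo '192.168.39.1	host.minikube.internal' | sudo tee -a /etc/hosts
)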
	I0920 18:32:52.540223  274391 kubeadm.go:883] updating cluster {Name:multinode-029872 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-029872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:32:52.540380  274391 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:32:52.540428  274391 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:32:52.582828  274391 command_runner.go:130] > {
	I0920 18:32:52.582862  274391 command_runner.go:130] >   "images": [
	I0920 18:32:52.582869  274391 command_runner.go:130] >     {
	I0920 18:32:52.582881  274391 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0920 18:32:52.582888  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.582897  274391 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0920 18:32:52.582902  274391 command_runner.go:130] >       ],
	I0920 18:32:52.582909  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.582921  274391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0920 18:32:52.582955  274391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0920 18:32:52.582963  274391 command_runner.go:130] >       ],
	I0920 18:32:52.582968  274391 command_runner.go:130] >       "size": "87190579",
	I0920 18:32:52.582972  274391 command_runner.go:130] >       "uid": null,
	I0920 18:32:52.582976  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.582984  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.582992  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.582995  274391 command_runner.go:130] >     },
	I0920 18:32:52.582999  274391 command_runner.go:130] >     {
	I0920 18:32:52.583004  274391 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0920 18:32:52.583008  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.583013  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0920 18:32:52.583017  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583021  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.583028  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0920 18:32:52.583037  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0920 18:32:52.583041  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583046  274391 command_runner.go:130] >       "size": "1363676",
	I0920 18:32:52.583050  274391 command_runner.go:130] >       "uid": null,
	I0920 18:32:52.583058  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.583067  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.583073  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.583083  274391 command_runner.go:130] >     },
	I0920 18:32:52.583089  274391 command_runner.go:130] >     {
	I0920 18:32:52.583101  274391 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0920 18:32:52.583110  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.583119  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0920 18:32:52.583127  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583133  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.583145  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0920 18:32:52.583160  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0920 18:32:52.583167  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583175  274391 command_runner.go:130] >       "size": "31470524",
	I0920 18:32:52.583182  274391 command_runner.go:130] >       "uid": null,
	I0920 18:32:52.583188  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.583194  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.583202  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.583205  274391 command_runner.go:130] >     },
	I0920 18:32:52.583209  274391 command_runner.go:130] >     {
	I0920 18:32:52.583216  274391 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0920 18:32:52.583220  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.583225  274391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0920 18:32:52.583229  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583235  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.583245  274391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0920 18:32:52.583262  274391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0920 18:32:52.583267  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583272  274391 command_runner.go:130] >       "size": "63273227",
	I0920 18:32:52.583277  274391 command_runner.go:130] >       "uid": null,
	I0920 18:32:52.583281  274391 command_runner.go:130] >       "username": "nonroot",
	I0920 18:32:52.583284  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.583288  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.583292  274391 command_runner.go:130] >     },
	I0920 18:32:52.583295  274391 command_runner.go:130] >     {
	I0920 18:32:52.583301  274391 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0920 18:32:52.583307  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.583312  274391 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0920 18:32:52.583317  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583321  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.583330  274391 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0920 18:32:52.583339  274391 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0920 18:32:52.583343  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583347  274391 command_runner.go:130] >       "size": "149009664",
	I0920 18:32:52.583350  274391 command_runner.go:130] >       "uid": {
	I0920 18:32:52.583357  274391 command_runner.go:130] >         "value": "0"
	I0920 18:32:52.583360  274391 command_runner.go:130] >       },
	I0920 18:32:52.583364  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.583368  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.583372  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.583376  274391 command_runner.go:130] >     },
	I0920 18:32:52.583379  274391 command_runner.go:130] >     {
	I0920 18:32:52.583385  274391 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0920 18:32:52.583391  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.583396  274391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0920 18:32:52.583401  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583405  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.583412  274391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0920 18:32:52.583418  274391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0920 18:32:52.583422  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583425  274391 command_runner.go:130] >       "size": "95237600",
	I0920 18:32:52.583429  274391 command_runner.go:130] >       "uid": {
	I0920 18:32:52.583434  274391 command_runner.go:130] >         "value": "0"
	I0920 18:32:52.583436  274391 command_runner.go:130] >       },
	I0920 18:32:52.583440  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.583443  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.583447  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.583450  274391 command_runner.go:130] >     },
	I0920 18:32:52.583453  274391 command_runner.go:130] >     {
	I0920 18:32:52.583459  274391 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0920 18:32:52.583462  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.583467  274391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0920 18:32:52.583471  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583474  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.583481  274391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0920 18:32:52.583490  274391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0920 18:32:52.583494  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583498  274391 command_runner.go:130] >       "size": "89437508",
	I0920 18:32:52.583501  274391 command_runner.go:130] >       "uid": {
	I0920 18:32:52.583507  274391 command_runner.go:130] >         "value": "0"
	I0920 18:32:52.583513  274391 command_runner.go:130] >       },
	I0920 18:32:52.583517  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.583521  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.583525  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.583528  274391 command_runner.go:130] >     },
	I0920 18:32:52.583531  274391 command_runner.go:130] >     {
	I0920 18:32:52.583537  274391 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0920 18:32:52.583543  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.583547  274391 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0920 18:32:52.583550  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583555  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.583572  274391 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0920 18:32:52.583581  274391 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0920 18:32:52.583585  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583589  274391 command_runner.go:130] >       "size": "92733849",
	I0920 18:32:52.583594  274391 command_runner.go:130] >       "uid": null,
	I0920 18:32:52.583598  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.583601  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.583605  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.583608  274391 command_runner.go:130] >     },
	I0920 18:32:52.583611  274391 command_runner.go:130] >     {
	I0920 18:32:52.583621  274391 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0920 18:32:52.583625  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.583629  274391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0920 18:32:52.583632  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583636  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.583643  274391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0920 18:32:52.583649  274391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0920 18:32:52.583653  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583660  274391 command_runner.go:130] >       "size": "68420934",
	I0920 18:32:52.583664  274391 command_runner.go:130] >       "uid": {
	I0920 18:32:52.583668  274391 command_runner.go:130] >         "value": "0"
	I0920 18:32:52.583672  274391 command_runner.go:130] >       },
	I0920 18:32:52.583676  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.583679  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.583683  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.583686  274391 command_runner.go:130] >     },
	I0920 18:32:52.583689  274391 command_runner.go:130] >     {
	I0920 18:32:52.583695  274391 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0920 18:32:52.583701  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.583705  274391 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0920 18:32:52.583709  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583713  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.583719  274391 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0920 18:32:52.583728  274391 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0920 18:32:52.583732  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583736  274391 command_runner.go:130] >       "size": "742080",
	I0920 18:32:52.583740  274391 command_runner.go:130] >       "uid": {
	I0920 18:32:52.583746  274391 command_runner.go:130] >         "value": "65535"
	I0920 18:32:52.583751  274391 command_runner.go:130] >       },
	I0920 18:32:52.583754  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.583758  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.583762  274391 command_runner.go:130] >       "pinned": true
	I0920 18:32:52.583765  274391 command_runner.go:130] >     }
	I0920 18:32:52.583769  274391 command_runner.go:130] >   ]
	I0920 18:32:52.583772  274391 command_runner.go:130] > }
	I0920 18:32:52.583997  274391 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:32:52.584013  274391 crio.go:433] Images already preloaded, skipping extraction
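(The JSON above is what crio.go walks to conclude that every image needed for v1.31.1 is already on disk. The same inventory can be eyeballed by hand from the node; sketch, assuming jq is installed there:

	sudo crictl images --output json | jq -r '.images[].repoTags[]'
	# or, without jq, the plain tabular listing:
	sudo crictl images
)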
	I0920 18:32:52.584075  274391 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:32:52.615543  274391 command_runner.go:130] > {
	I0920 18:32:52.615569  274391 command_runner.go:130] >   "images": [
	I0920 18:32:52.615575  274391 command_runner.go:130] >     {
	I0920 18:32:52.615586  274391 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0920 18:32:52.615600  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.615608  274391 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0920 18:32:52.615613  274391 command_runner.go:130] >       ],
	I0920 18:32:52.615619  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.615631  274391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0920 18:32:52.615641  274391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0920 18:32:52.615646  274391 command_runner.go:130] >       ],
	I0920 18:32:52.615653  274391 command_runner.go:130] >       "size": "87190579",
	I0920 18:32:52.615664  274391 command_runner.go:130] >       "uid": null,
	I0920 18:32:52.615673  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.615686  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.615694  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.615700  274391 command_runner.go:130] >     },
	I0920 18:32:52.615706  274391 command_runner.go:130] >     {
	I0920 18:32:52.615716  274391 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0920 18:32:52.615725  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.615733  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0920 18:32:52.615739  274391 command_runner.go:130] >       ],
	I0920 18:32:52.615749  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.615764  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0920 18:32:52.615780  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0920 18:32:52.615789  274391 command_runner.go:130] >       ],
	I0920 18:32:52.615796  274391 command_runner.go:130] >       "size": "1363676",
	I0920 18:32:52.615805  274391 command_runner.go:130] >       "uid": null,
	I0920 18:32:52.615816  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.615825  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.615832  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.615840  274391 command_runner.go:130] >     },
	I0920 18:32:52.615845  274391 command_runner.go:130] >     {
	I0920 18:32:52.615854  274391 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0920 18:32:52.615863  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.615876  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0920 18:32:52.615885  274391 command_runner.go:130] >       ],
	I0920 18:32:52.615892  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.615908  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0920 18:32:52.615924  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0920 18:32:52.615932  274391 command_runner.go:130] >       ],
	I0920 18:32:52.615946  274391 command_runner.go:130] >       "size": "31470524",
	I0920 18:32:52.615955  274391 command_runner.go:130] >       "uid": null,
	I0920 18:32:52.615962  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.615972  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.615981  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.615989  274391 command_runner.go:130] >     },
	I0920 18:32:52.615996  274391 command_runner.go:130] >     {
	I0920 18:32:52.616009  274391 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0920 18:32:52.616018  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.616028  274391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0920 18:32:52.616036  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616044  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.616060  274391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0920 18:32:52.616080  274391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0920 18:32:52.616092  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616103  274391 command_runner.go:130] >       "size": "63273227",
	I0920 18:32:52.616113  274391 command_runner.go:130] >       "uid": null,
	I0920 18:32:52.616123  274391 command_runner.go:130] >       "username": "nonroot",
	I0920 18:32:52.616131  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.616140  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.616146  274391 command_runner.go:130] >     },
	I0920 18:32:52.616154  274391 command_runner.go:130] >     {
	I0920 18:32:52.616166  274391 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0920 18:32:52.616176  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.616186  274391 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0920 18:32:52.616194  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616202  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.616217  274391 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0920 18:32:52.616231  274391 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0920 18:32:52.616239  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616247  274391 command_runner.go:130] >       "size": "149009664",
	I0920 18:32:52.616256  274391 command_runner.go:130] >       "uid": {
	I0920 18:32:52.616264  274391 command_runner.go:130] >         "value": "0"
	I0920 18:32:52.616273  274391 command_runner.go:130] >       },
	I0920 18:32:52.616282  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.616292  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.616302  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.616308  274391 command_runner.go:130] >     },
	I0920 18:32:52.616314  274391 command_runner.go:130] >     {
	I0920 18:32:52.616326  274391 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0920 18:32:52.616336  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.616347  274391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0920 18:32:52.616355  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616362  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.616377  274391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0920 18:32:52.616393  274391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0920 18:32:52.616402  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616411  274391 command_runner.go:130] >       "size": "95237600",
	I0920 18:32:52.616420  274391 command_runner.go:130] >       "uid": {
	I0920 18:32:52.616428  274391 command_runner.go:130] >         "value": "0"
	I0920 18:32:52.616436  274391 command_runner.go:130] >       },
	I0920 18:32:52.616443  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.616452  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.616462  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.616467  274391 command_runner.go:130] >     },
	I0920 18:32:52.616473  274391 command_runner.go:130] >     {
	I0920 18:32:52.616485  274391 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0920 18:32:52.616501  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.616513  274391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0920 18:32:52.616521  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616529  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.616544  274391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0920 18:32:52.616560  274391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0920 18:32:52.616570  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616579  274391 command_runner.go:130] >       "size": "89437508",
	I0920 18:32:52.616588  274391 command_runner.go:130] >       "uid": {
	I0920 18:32:52.616622  274391 command_runner.go:130] >         "value": "0"
	I0920 18:32:52.616628  274391 command_runner.go:130] >       },
	I0920 18:32:52.616634  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.616640  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.616650  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.616656  274391 command_runner.go:130] >     },
	I0920 18:32:52.616662  274391 command_runner.go:130] >     {
	I0920 18:32:52.616673  274391 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0920 18:32:52.616683  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.616693  274391 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0920 18:32:52.616701  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616709  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.616737  274391 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0920 18:32:52.616753  274391 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0920 18:32:52.616763  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616777  274391 command_runner.go:130] >       "size": "92733849",
	I0920 18:32:52.616787  274391 command_runner.go:130] >       "uid": null,
	I0920 18:32:52.616796  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.616804  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.616813  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.616822  274391 command_runner.go:130] >     },
	I0920 18:32:52.616831  274391 command_runner.go:130] >     {
	I0920 18:32:52.616842  274391 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0920 18:32:52.616851  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.616860  274391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0920 18:32:52.616868  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616875  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.616891  274391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0920 18:32:52.616907  274391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0920 18:32:52.616915  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616924  274391 command_runner.go:130] >       "size": "68420934",
	I0920 18:32:52.616933  274391 command_runner.go:130] >       "uid": {
	I0920 18:32:52.616940  274391 command_runner.go:130] >         "value": "0"
	I0920 18:32:52.616948  274391 command_runner.go:130] >       },
	I0920 18:32:52.616956  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.616965  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.616972  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.616980  274391 command_runner.go:130] >     },
	I0920 18:32:52.616987  274391 command_runner.go:130] >     {
	I0920 18:32:52.616998  274391 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0920 18:32:52.617007  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.617016  274391 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0920 18:32:52.617023  274391 command_runner.go:130] >       ],
	I0920 18:32:52.617031  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.617045  274391 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0920 18:32:52.617060  274391 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0920 18:32:52.617069  274391 command_runner.go:130] >       ],
	I0920 18:32:52.617078  274391 command_runner.go:130] >       "size": "742080",
	I0920 18:32:52.617088  274391 command_runner.go:130] >       "uid": {
	I0920 18:32:52.617098  274391 command_runner.go:130] >         "value": "65535"
	I0920 18:32:52.617104  274391 command_runner.go:130] >       },
	I0920 18:32:52.617112  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.617122  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.617130  274391 command_runner.go:130] >       "pinned": true
	I0920 18:32:52.617137  274391 command_runner.go:130] >     }
	I0920 18:32:52.617144  274391 command_runner.go:130] >   ]
	I0920 18:32:52.617150  274391 command_runner.go:130] > }
	I0920 18:32:52.617272  274391 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:32:52.617285  274391 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:32:52.617296  274391 kubeadm.go:934] updating node { 192.168.39.208 8443 v1.31.1 crio true true} ...
	I0920 18:32:52.617409  274391 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-029872 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-029872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
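(The ExecStart fragment above ends up in a kubelet systemd drop-in on the node. To see the unit the kubelet actually runs with, and to pick up an edit to it, something like the following suffices; sketch, the drop-in path itself is managed by minikube:

	sudo systemctl cat kubelet          # unit plus drop-ins, including the ExecStart above
	sudo systemctl daemon-reload && sudo systemctl restart kubelet
)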
	I0920 18:32:52.617493  274391 ssh_runner.go:195] Run: crio config
	I0920 18:32:52.656454  274391 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0920 18:32:52.656485  274391 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0920 18:32:52.656495  274391 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0920 18:32:52.656499  274391 command_runner.go:130] > #
	I0920 18:32:52.656509  274391 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0920 18:32:52.656516  274391 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0920 18:32:52.656526  274391 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0920 18:32:52.656535  274391 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0920 18:32:52.656540  274391 command_runner.go:130] > # reload'.
	I0920 18:32:52.656551  274391 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0920 18:32:52.656562  274391 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0920 18:32:52.656575  274391 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0920 18:32:52.656585  274391 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0920 18:32:52.656593  274391 command_runner.go:130] > [crio]
	I0920 18:32:52.656606  274391 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0920 18:32:52.656615  274391 command_runner.go:130] > # containers images, in this directory.
	I0920 18:32:52.656626  274391 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0920 18:32:52.656641  274391 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0920 18:32:52.656724  274391 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0920 18:32:52.656752  274391 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0920 18:32:52.656763  274391 command_runner.go:130] > # imagestore = ""
	I0920 18:32:52.656777  274391 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0920 18:32:52.656787  274391 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0920 18:32:52.656949  274391 command_runner.go:130] > storage_driver = "overlay"
	I0920 18:32:52.656964  274391 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0920 18:32:52.656972  274391 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0920 18:32:52.656980  274391 command_runner.go:130] > storage_option = [
	I0920 18:32:52.657110  274391 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0920 18:32:52.657137  274391 command_runner.go:130] > ]
	I0920 18:32:52.657153  274391 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0920 18:32:52.657167  274391 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0920 18:32:52.657356  274391 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0920 18:32:52.657377  274391 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0920 18:32:52.657387  274391 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0920 18:32:52.657395  274391 command_runner.go:130] > # always happen on a node reboot
	I0920 18:32:52.657600  274391 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0920 18:32:52.657616  274391 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0920 18:32:52.657622  274391 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0920 18:32:52.657627  274391 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0920 18:32:52.657747  274391 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0920 18:32:52.657762  274391 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0920 18:32:52.657772  274391 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0920 18:32:52.658236  274391 command_runner.go:130] > # internal_wipe = true
	I0920 18:32:52.658248  274391 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0920 18:32:52.658254  274391 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0920 18:32:52.658628  274391 command_runner.go:130] > # internal_repair = false
	I0920 18:32:52.658643  274391 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0920 18:32:52.658653  274391 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0920 18:32:52.658659  274391 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0920 18:32:52.658935  274391 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0920 18:32:52.658948  274391 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0920 18:32:52.658955  274391 command_runner.go:130] > [crio.api]
	I0920 18:32:52.658960  274391 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0920 18:32:52.659181  274391 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0920 18:32:52.659191  274391 command_runner.go:130] > # IP address on which the stream server will listen.
	I0920 18:32:52.659332  274391 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0920 18:32:52.659347  274391 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0920 18:32:52.659355  274391 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0920 18:32:52.659543  274391 command_runner.go:130] > # stream_port = "0"
	I0920 18:32:52.659561  274391 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0920 18:32:52.659773  274391 command_runner.go:130] > # stream_enable_tls = false
	I0920 18:32:52.659788  274391 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0920 18:32:52.660122  274391 command_runner.go:130] > # stream_idle_timeout = ""
	I0920 18:32:52.660134  274391 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0920 18:32:52.660140  274391 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0920 18:32:52.660143  274391 command_runner.go:130] > # minutes.
	I0920 18:32:52.660276  274391 command_runner.go:130] > # stream_tls_cert = ""
	I0920 18:32:52.660296  274391 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0920 18:32:52.660307  274391 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0920 18:32:52.660527  274391 command_runner.go:130] > # stream_tls_key = ""
	I0920 18:32:52.660550  274391 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0920 18:32:52.660561  274391 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0920 18:32:52.660580  274391 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0920 18:32:52.660657  274391 command_runner.go:130] > # stream_tls_ca = ""
	I0920 18:32:52.660684  274391 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0920 18:32:52.660798  274391 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0920 18:32:52.660814  274391 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0920 18:32:52.660980  274391 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
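	The two uncommented grpc_max_*_msg_size values above are minikube lowering CRI-O's gRPC message limits from the 80 MiB default to 16 MiB. Applied by hand, the same override fits in a small drop-in under /etc/crio/crio.conf.d/ (a directory CRI-O merges on top of crio.conf); the file name below is illustrative, not taken from this run:

	# /etc/crio/crio.conf.d/10-grpc-limits.conf (illustrative name)
	[crio.api]
	grpc_max_send_msg_size = 16777216   # 16 MiB
	grpc_max_recv_msg_size = 16777216   # 16 MiB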
	I0920 18:32:52.661001  274391 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0920 18:32:52.661011  274391 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0920 18:32:52.661018  274391 command_runner.go:130] > [crio.runtime]
	I0920 18:32:52.661028  274391 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0920 18:32:52.661041  274391 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0920 18:32:52.661050  274391 command_runner.go:130] > # "nofile=1024:2048"
	I0920 18:32:52.661060  274391 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0920 18:32:52.661080  274391 command_runner.go:130] > # default_ulimits = [
	I0920 18:32:52.661214  274391 command_runner.go:130] > # ]
	I0920 18:32:52.661234  274391 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0920 18:32:52.661444  274391 command_runner.go:130] > # no_pivot = false
	I0920 18:32:52.661461  274391 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0920 18:32:52.661472  274391 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0920 18:32:52.661693  274391 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0920 18:32:52.661706  274391 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0920 18:32:52.661711  274391 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0920 18:32:52.661717  274391 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0920 18:32:52.661812  274391 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0920 18:32:52.661820  274391 command_runner.go:130] > # Cgroup setting for conmon
	I0920 18:32:52.661827  274391 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0920 18:32:52.662052  274391 command_runner.go:130] > conmon_cgroup = "pod"
	I0920 18:32:52.662066  274391 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0920 18:32:52.662071  274391 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0920 18:32:52.662078  274391 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0920 18:32:52.662082  274391 command_runner.go:130] > conmon_env = [
	I0920 18:32:52.662177  274391 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0920 18:32:52.662186  274391 command_runner.go:130] > ]
	I0920 18:32:52.662195  274391 command_runner.go:130] > # Additional environment variables to set for all the
	I0920 18:32:52.662203  274391 command_runner.go:130] > # containers. These are overridden if set in the
	I0920 18:32:52.662213  274391 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0920 18:32:52.662293  274391 command_runner.go:130] > # default_env = [
	I0920 18:32:52.662409  274391 command_runner.go:130] > # ]
	I0920 18:32:52.662418  274391 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0920 18:32:52.662425  274391 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0920 18:32:52.662639  274391 command_runner.go:130] > # selinux = false
	I0920 18:32:52.662657  274391 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0920 18:32:52.662668  274391 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0920 18:32:52.662677  274391 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0920 18:32:52.662842  274391 command_runner.go:130] > # seccomp_profile = ""
	I0920 18:32:52.662858  274391 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0920 18:32:52.662873  274391 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0920 18:32:52.662886  274391 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0920 18:32:52.662896  274391 command_runner.go:130] > # which might increase security.
	I0920 18:32:52.662907  274391 command_runner.go:130] > # This option is currently deprecated,
	I0920 18:32:52.662916  274391 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0920 18:32:52.663027  274391 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0920 18:32:52.663043  274391 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0920 18:32:52.663054  274391 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0920 18:32:52.663064  274391 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0920 18:32:52.663078  274391 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0920 18:32:52.663088  274391 command_runner.go:130] > # This option supports live configuration reload.
	I0920 18:32:52.663259  274391 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0920 18:32:52.663272  274391 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0920 18:32:52.663281  274391 command_runner.go:130] > # the cgroup blockio controller.
	I0920 18:32:52.663406  274391 command_runner.go:130] > # blockio_config_file = ""
	I0920 18:32:52.663419  274391 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0920 18:32:52.663424  274391 command_runner.go:130] > # blockio parameters.
	I0920 18:32:52.663645  274391 command_runner.go:130] > # blockio_reload = false
	I0920 18:32:52.663664  274391 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0920 18:32:52.663670  274391 command_runner.go:130] > # irqbalance daemon.
	I0920 18:32:52.663966  274391 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0920 18:32:52.663977  274391 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I0920 18:32:52.663984  274391 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0920 18:32:52.663995  274391 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0920 18:32:52.664005  274391 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0920 18:32:52.664018  274391 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0920 18:32:52.664026  274391 command_runner.go:130] > # This option supports live configuration reload.
	I0920 18:32:52.664037  274391 command_runner.go:130] > # rdt_config_file = ""
	I0920 18:32:52.664048  274391 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0920 18:32:52.664053  274391 command_runner.go:130] > cgroup_manager = "cgroupfs"
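	cgroup_manager is one of the few values this run sets explicitly, and it has to agree with the kubelet's cgroupDriver, which the kubeadm configuration later in this log also sets to cgroupfs. A minimal sketch of the CRI-O side, keeping conmon_cgroup at "pod" as shown earlier in the dump (a systemd slice there is only valid with the systemd manager):

	[crio.runtime]
	cgroup_manager = "cgroupfs"   # must match the kubelet's cgroupDriver
	conmon_cgroup = "pod"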
	I0920 18:32:52.664074  274391 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0920 18:32:52.664089  274391 command_runner.go:130] > # separate_pull_cgroup = ""
	I0920 18:32:52.664101  274391 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0920 18:32:52.664113  274391 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0920 18:32:52.664123  274391 command_runner.go:130] > # will be added.
	I0920 18:32:52.664128  274391 command_runner.go:130] > # default_capabilities = [
	I0920 18:32:52.664134  274391 command_runner.go:130] > # 	"CHOWN",
	I0920 18:32:52.664149  274391 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0920 18:32:52.664155  274391 command_runner.go:130] > # 	"FSETID",
	I0920 18:32:52.664161  274391 command_runner.go:130] > # 	"FOWNER",
	I0920 18:32:52.664167  274391 command_runner.go:130] > # 	"SETGID",
	I0920 18:32:52.664174  274391 command_runner.go:130] > # 	"SETUID",
	I0920 18:32:52.664180  274391 command_runner.go:130] > # 	"SETPCAP",
	I0920 18:32:52.664191  274391 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0920 18:32:52.664197  274391 command_runner.go:130] > # 	"KILL",
	I0920 18:32:52.664206  274391 command_runner.go:130] > # ]
	I0920 18:32:52.664218  274391 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0920 18:32:52.664233  274391 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0920 18:32:52.664245  274391 command_runner.go:130] > # add_inheritable_capabilities = false
	I0920 18:32:52.664255  274391 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0920 18:32:52.664268  274391 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0920 18:32:52.664276  274391 command_runner.go:130] > default_sysctls = [
	I0920 18:32:52.664285  274391 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0920 18:32:52.664292  274391 command_runner.go:130] > ]
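	Both of the lists above are plain TOML arrays, so adding or dropping an entry is just an edit to the list; the values below are copied from the commented defaults and from the single sysctl this run sets:

	[crio.runtime]
	default_capabilities = [
		"CHOWN", "DAC_OVERRIDE", "FSETID", "FOWNER",
		"SETGID", "SETUID", "SETPCAP", "NET_BIND_SERVICE", "KILL",
	]
	default_sysctls = [
		"net.ipv4.ip_unprivileged_port_start=0",
	]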
	I0920 18:32:52.664300  274391 command_runner.go:130] > # List of devices on the host that a
	I0920 18:32:52.664313  274391 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0920 18:32:52.664322  274391 command_runner.go:130] > # allowed_devices = [
	I0920 18:32:52.664329  274391 command_runner.go:130] > # 	"/dev/fuse",
	I0920 18:32:52.664335  274391 command_runner.go:130] > # ]
	I0920 18:32:52.664343  274391 command_runner.go:130] > # List of additional devices, specified as
	I0920 18:32:52.664356  274391 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0920 18:32:52.664367  274391 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0920 18:32:52.664381  274391 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0920 18:32:52.664387  274391 command_runner.go:130] > # additional_devices = [
	I0920 18:32:52.664396  274391 command_runner.go:130] > # ]
	I0920 18:32:52.664405  274391 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0920 18:32:52.664414  274391 command_runner.go:130] > # cdi_spec_dirs = [
	I0920 18:32:52.664420  274391 command_runner.go:130] > # 	"/etc/cdi",
	I0920 18:32:52.664431  274391 command_runner.go:130] > # 	"/var/run/cdi",
	I0920 18:32:52.664438  274391 command_runner.go:130] > # ]
	I0920 18:32:52.664451  274391 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0920 18:32:52.664463  274391 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0920 18:32:52.664474  274391 command_runner.go:130] > # Defaults to false.
	I0920 18:32:52.664482  274391 command_runner.go:130] > # device_ownership_from_security_context = false
	I0920 18:32:52.664495  274391 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0920 18:32:52.664507  274391 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0920 18:32:52.664516  274391 command_runner.go:130] > # hooks_dir = [
	I0920 18:32:52.664524  274391 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0920 18:32:52.664533  274391 command_runner.go:130] > # ]
	I0920 18:32:52.664542  274391 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0920 18:32:52.664558  274391 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0920 18:32:52.664570  274391 command_runner.go:130] > # its default mounts from the following two files:
	I0920 18:32:52.664576  274391 command_runner.go:130] > #
	I0920 18:32:52.664585  274391 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0920 18:32:52.664598  274391 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0920 18:32:52.664616  274391 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0920 18:32:52.664624  274391 command_runner.go:130] > #
	I0920 18:32:52.664634  274391 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0920 18:32:52.664647  274391 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0920 18:32:52.664659  274391 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0920 18:32:52.664671  274391 command_runner.go:130] > #      only add mounts it finds in this file.
	I0920 18:32:52.664676  274391 command_runner.go:130] > #
	I0920 18:32:52.664686  274391 command_runner.go:130] > # default_mounts_file = ""
	I0920 18:32:52.664694  274391 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0920 18:32:52.664708  274391 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0920 18:32:52.664718  274391 command_runner.go:130] > pids_limit = 1024
	I0920 18:32:52.664728  274391 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0920 18:32:52.664740  274391 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0920 18:32:52.664753  274391 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0920 18:32:52.664766  274391 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0920 18:32:52.664775  274391 command_runner.go:130] > # log_size_max = -1
	I0920 18:32:52.664787  274391 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0920 18:32:52.664798  274391 command_runner.go:130] > # log_to_journald = false
	I0920 18:32:52.664809  274391 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0920 18:32:52.664821  274391 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0920 18:32:52.664832  274391 command_runner.go:130] > # Path to directory for container attach sockets.
	I0920 18:32:52.664841  274391 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0920 18:32:52.664852  274391 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0920 18:32:52.664861  274391 command_runner.go:130] > # bind_mount_prefix = ""
	I0920 18:32:52.664870  274391 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0920 18:32:52.664879  274391 command_runner.go:130] > # read_only = false
	I0920 18:32:52.664889  274391 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0920 18:32:52.664902  274391 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0920 18:32:52.664912  274391 command_runner.go:130] > # live configuration reload.
	I0920 18:32:52.664919  274391 command_runner.go:130] > # log_level = "info"
	I0920 18:32:52.664931  274391 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0920 18:32:52.664940  274391 command_runner.go:130] > # This option supports live configuration reload.
	I0920 18:32:52.664947  274391 command_runner.go:130] > # log_filter = ""
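	Since log_level and log_filter both support live configuration reload, raising CRI-O's verbosity for debugging does not need a daemon restart, only a reload (for example a SIGHUP via "systemctl reload crio"). A sketch of the override, using one of the levels listed above:

	[crio.runtime]
	log_level = "debug"    # fatal, panic, error, warn, info, debug or trace
	# log_filter = ""      # optional regular expression to filter messages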
	I0920 18:32:52.664958  274391 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0920 18:32:52.664971  274391 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0920 18:32:52.664978  274391 command_runner.go:130] > # separated by comma.
	I0920 18:32:52.664993  274391 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 18:32:52.665002  274391 command_runner.go:130] > # uid_mappings = ""
	I0920 18:32:52.665016  274391 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0920 18:32:52.665029  274391 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0920 18:32:52.665039  274391 command_runner.go:130] > # separated by comma.
	I0920 18:32:52.665050  274391 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 18:32:52.665060  274391 command_runner.go:130] > # gid_mappings = ""
	I0920 18:32:52.665070  274391 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0920 18:32:52.665083  274391 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0920 18:32:52.665094  274391 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0920 18:32:52.665109  274391 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 18:32:52.665119  274391 command_runner.go:130] > # minimum_mappable_uid = -1
	I0920 18:32:52.665129  274391 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0920 18:32:52.665140  274391 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0920 18:32:52.665151  274391 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0920 18:32:52.665165  274391 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 18:32:52.665172  274391 command_runner.go:130] > # minimum_mappable_gid = -1
	I0920 18:32:52.665205  274391 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0920 18:32:52.665220  274391 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0920 18:32:52.665228  274391 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0920 18:32:52.665234  274391 command_runner.go:130] > # ctr_stop_timeout = 30
	I0920 18:32:52.665247  274391 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0920 18:32:52.665259  274391 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0920 18:32:52.665270  274391 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0920 18:32:52.665278  274391 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0920 18:32:52.665287  274391 command_runner.go:130] > drop_infra_ctr = false
	I0920 18:32:52.665297  274391 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0920 18:32:52.665309  274391 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0920 18:32:52.665324  274391 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0920 18:32:52.665333  274391 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0920 18:32:52.665344  274391 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0920 18:32:52.665361  274391 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0920 18:32:52.665374  274391 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0920 18:32:52.665383  274391 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0920 18:32:52.665392  274391 command_runner.go:130] > # shared_cpuset = ""
	I0920 18:32:52.665403  274391 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0920 18:32:52.665413  274391 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0920 18:32:52.665423  274391 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0920 18:32:52.665435  274391 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0920 18:32:52.665455  274391 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0920 18:32:52.665468  274391 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0920 18:32:52.665482  274391 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0920 18:32:52.665490  274391 command_runner.go:130] > # enable_criu_support = false
	I0920 18:32:52.665500  274391 command_runner.go:130] > # Enable/disable the generation of the container,
	I0920 18:32:52.665510  274391 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0920 18:32:52.665519  274391 command_runner.go:130] > # enable_pod_events = false
	I0920 18:32:52.665528  274391 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0920 18:32:52.665549  274391 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0920 18:32:52.665562  274391 command_runner.go:130] > # default_runtime = "runc"
	I0920 18:32:52.665573  274391 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0920 18:32:52.665588  274391 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0920 18:32:52.665605  274391 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0920 18:32:52.665620  274391 command_runner.go:130] > # creation as a file is not desired either.
	I0920 18:32:52.665637  274391 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0920 18:32:52.665648  274391 command_runner.go:130] > # the hostname is being managed dynamically.
	I0920 18:32:52.665656  274391 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0920 18:32:52.665663  274391 command_runner.go:130] > # ]
	I0920 18:32:52.665673  274391 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0920 18:32:52.665687  274391 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0920 18:32:52.665697  274391 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0920 18:32:52.665702  274391 command_runner.go:130] > # Each entry in the table should follow the format:
	I0920 18:32:52.665705  274391 command_runner.go:130] > #
	I0920 18:32:52.665717  274391 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0920 18:32:52.665725  274391 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0920 18:32:52.665753  274391 command_runner.go:130] > # runtime_type = "oci"
	I0920 18:32:52.665764  274391 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0920 18:32:52.665772  274391 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0920 18:32:52.665781  274391 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0920 18:32:52.665785  274391 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0920 18:32:52.665793  274391 command_runner.go:130] > # monitor_env = []
	I0920 18:32:52.665801  274391 command_runner.go:130] > # privileged_without_host_devices = false
	I0920 18:32:52.665811  274391 command_runner.go:130] > # allowed_annotations = []
	I0920 18:32:52.665823  274391 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0920 18:32:52.665831  274391 command_runner.go:130] > # Where:
	I0920 18:32:52.665840  274391 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0920 18:32:52.665853  274391 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0920 18:32:52.665866  274391 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0920 18:32:52.665878  274391 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0920 18:32:52.665887  274391 command_runner.go:130] > #   in $PATH.
	I0920 18:32:52.665899  274391 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0920 18:32:52.665929  274391 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0920 18:32:52.665940  274391 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0920 18:32:52.665949  274391 command_runner.go:130] > #   state.
	I0920 18:32:52.665960  274391 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0920 18:32:52.665972  274391 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0920 18:32:52.665985  274391 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0920 18:32:52.665996  274391 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0920 18:32:52.666002  274391 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0920 18:32:52.666014  274391 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0920 18:32:52.666025  274391 command_runner.go:130] > #   The currently recognized values are:
	I0920 18:32:52.666044  274391 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0920 18:32:52.666058  274391 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0920 18:32:52.666070  274391 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0920 18:32:52.666082  274391 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0920 18:32:52.666095  274391 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0920 18:32:52.666107  274391 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0920 18:32:52.666122  274391 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0920 18:32:52.666133  274391 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0920 18:32:52.666146  274391 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0920 18:32:52.666158  274391 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0920 18:32:52.666168  274391 command_runner.go:130] > #   deprecated option "conmon".
	I0920 18:32:52.666180  274391 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0920 18:32:52.666191  274391 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0920 18:32:52.666205  274391 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0920 18:32:52.666216  274391 command_runner.go:130] > #   should be moved to the container's cgroup
	I0920 18:32:52.666229  274391 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0920 18:32:52.666238  274391 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0920 18:32:52.666246  274391 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0920 18:32:52.666258  274391 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0920 18:32:52.666266  274391 command_runner.go:130] > #
	I0920 18:32:52.666274  274391 command_runner.go:130] > # Using the seccomp notifier feature:
	I0920 18:32:52.666283  274391 command_runner.go:130] > #
	I0920 18:32:52.666296  274391 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0920 18:32:52.666311  274391 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0920 18:32:52.666319  274391 command_runner.go:130] > #
	I0920 18:32:52.666333  274391 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0920 18:32:52.666342  274391 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0920 18:32:52.666349  274391 command_runner.go:130] > #
	I0920 18:32:52.666362  274391 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0920 18:32:52.666372  274391 command_runner.go:130] > # feature.
	I0920 18:32:52.666381  274391 command_runner.go:130] > #
	I0920 18:32:52.666395  274391 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0920 18:32:52.666407  274391 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0920 18:32:52.666420  274391 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0920 18:32:52.666429  274391 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0920 18:32:52.666441  274391 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0920 18:32:52.666450  274391 command_runner.go:130] > #
	I0920 18:32:52.666461  274391 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0920 18:32:52.666474  274391 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0920 18:32:52.666482  274391 command_runner.go:130] > #
	I0920 18:32:52.666494  274391 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0920 18:32:52.666506  274391 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0920 18:32:52.666514  274391 command_runner.go:130] > #
	I0920 18:32:52.666523  274391 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0920 18:32:52.666534  274391 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0920 18:32:52.666544  274391 command_runner.go:130] > # limitation.
	I0920 18:32:52.666551  274391 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0920 18:32:52.666562  274391 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0920 18:32:52.666571  274391 command_runner.go:130] > runtime_type = "oci"
	I0920 18:32:52.666580  274391 command_runner.go:130] > runtime_root = "/run/runc"
	I0920 18:32:52.666590  274391 command_runner.go:130] > runtime_config_path = ""
	I0920 18:32:52.666600  274391 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0920 18:32:52.666609  274391 command_runner.go:130] > monitor_cgroup = "pod"
	I0920 18:32:52.666620  274391 command_runner.go:130] > monitor_exec_cgroup = ""
	I0920 18:32:52.666629  274391 command_runner.go:130] > monitor_env = [
	I0920 18:32:52.666641  274391 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0920 18:32:52.666651  274391 command_runner.go:130] > ]
	I0920 18:32:52.666660  274391 command_runner.go:130] > privileged_without_host_devices = false
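	[crio.runtime.runtimes.runc] is the only handler defined in this run. Following the field descriptions above, a second handler would be declared the same way; the crun paths below are assumptions for illustration only, not something configured on this node:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"             # assumed install location
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	allowed_annotations = [
		"io.kubernetes.cri-o.Devices",
	]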
	I0920 18:32:52.666673  274391 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0920 18:32:52.666685  274391 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0920 18:32:52.666698  274391 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0920 18:32:52.666713  274391 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0920 18:32:52.666725  274391 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0920 18:32:52.666736  274391 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0920 18:32:52.666755  274391 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0920 18:32:52.666771  274391 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0920 18:32:52.666783  274391 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0920 18:32:52.666798  274391 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0920 18:32:52.666806  274391 command_runner.go:130] > # Example:
	I0920 18:32:52.666813  274391 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0920 18:32:52.666820  274391 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0920 18:32:52.666828  274391 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0920 18:32:52.666839  274391 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0920 18:32:52.666846  274391 command_runner.go:130] > # cpuset = 0
	I0920 18:32:52.666853  274391 command_runner.go:130] > # cpushares = "0-1"
	I0920 18:32:52.666861  274391 command_runner.go:130] > # Where:
	I0920 18:32:52.666868  274391 command_runner.go:130] > # The workload name is workload-type.
	I0920 18:32:52.666882  274391 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0920 18:32:52.666893  274391 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0920 18:32:52.666903  274391 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0920 18:32:52.666914  274391 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0920 18:32:52.666927  274391 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0920 18:32:52.666939  274391 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0920 18:32:52.666953  274391 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0920 18:32:52.666962  274391 command_runner.go:130] > # Default value is set to true
	I0920 18:32:52.666973  274391 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0920 18:32:52.666985  274391 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0920 18:32:52.666992  274391 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0920 18:32:52.666999  274391 command_runner.go:130] > # Default value is set to 'false'
	I0920 18:32:52.667011  274391 command_runner.go:130] > # disable_hostport_mapping = false
	I0920 18:32:52.667025  274391 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0920 18:32:52.667034  274391 command_runner.go:130] > #
	I0920 18:32:52.667046  274391 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0920 18:32:52.667058  274391 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0920 18:32:52.667068  274391 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0920 18:32:52.667075  274391 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0920 18:32:52.667082  274391 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0920 18:32:52.667088  274391 command_runner.go:130] > [crio.image]
	I0920 18:32:52.667098  274391 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0920 18:32:52.667105  274391 command_runner.go:130] > # default_transport = "docker://"
	I0920 18:32:52.667115  274391 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0920 18:32:52.667126  274391 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0920 18:32:52.667135  274391 command_runner.go:130] > # global_auth_file = ""
	I0920 18:32:52.667143  274391 command_runner.go:130] > # The image used to instantiate infra containers.
	I0920 18:32:52.667150  274391 command_runner.go:130] > # This option supports live configuration reload.
	I0920 18:32:52.667157  274391 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0920 18:32:52.667169  274391 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0920 18:32:52.667181  274391 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0920 18:32:52.667192  274391 command_runner.go:130] > # This option supports live configuration reload.
	I0920 18:32:52.667204  274391 command_runner.go:130] > # pause_image_auth_file = ""
	I0920 18:32:52.667244  274391 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0920 18:32:52.667263  274391 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0920 18:32:52.667277  274391 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0920 18:32:52.667286  274391 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0920 18:32:52.667296  274391 command_runner.go:130] > # pause_command = "/pause"
	I0920 18:32:52.667306  274391 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0920 18:32:52.667319  274391 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0920 18:32:52.667331  274391 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0920 18:32:52.667344  274391 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0920 18:32:52.667356  274391 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0920 18:32:52.667369  274391 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0920 18:32:52.667380  274391 command_runner.go:130] > # pinned_images = [
	I0920 18:32:52.667395  274391 command_runner.go:130] > # ]
	I0920 18:32:52.667405  274391 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0920 18:32:52.667415  274391 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0920 18:32:52.667426  274391 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0920 18:32:52.667439  274391 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0920 18:32:52.667449  274391 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0920 18:32:52.667459  274391 command_runner.go:130] > # signature_policy = ""
	I0920 18:32:52.667467  274391 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0920 18:32:52.667481  274391 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0920 18:32:52.667495  274391 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0920 18:32:52.667506  274391 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0920 18:32:52.667512  274391 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0920 18:32:52.667520  274391 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0920 18:32:52.667534  274391 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0920 18:32:52.667548  274391 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0920 18:32:52.667558  274391 command_runner.go:130] > # changing them here.
	I0920 18:32:52.667565  274391 command_runner.go:130] > # insecure_registries = [
	I0920 18:32:52.667572  274391 command_runner.go:130] > # ]
	I0920 18:32:52.667583  274391 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0920 18:32:52.667593  274391 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0920 18:32:52.667598  274391 command_runner.go:130] > # image_volumes = "mkdir"
	I0920 18:32:52.667605  274391 command_runner.go:130] > # Temporary directory to use for storing big files
	I0920 18:32:52.667621  274391 command_runner.go:130] > # big_files_temporary_dir = ""
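	Within [crio.image], pause_image is the only value this run overrides (registry.k8s.io/pause:3.10). Pinning it explicitly is largely redundant given the default behaviour described above, but it shows the glob syntax for pinned_images:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"
	pinned_images = [
		"registry.k8s.io/pause*",   # glob match; the wildcard may only appear at the end
	]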
	I0920 18:32:52.667634  274391 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0920 18:32:52.667642  274391 command_runner.go:130] > # CNI plugins.
	I0920 18:32:52.667648  274391 command_runner.go:130] > [crio.network]
	I0920 18:32:52.667661  274391 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0920 18:32:52.667669  274391 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0920 18:32:52.667678  274391 command_runner.go:130] > # cni_default_network = ""
	I0920 18:32:52.667684  274391 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0920 18:32:52.667693  274391 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0920 18:32:52.667702  274391 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0920 18:32:52.667712  274391 command_runner.go:130] > # plugin_dirs = [
	I0920 18:32:52.667729  274391 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0920 18:32:52.667738  274391 command_runner.go:130] > # ]
	I0920 18:32:52.667747  274391 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0920 18:32:52.667753  274391 command_runner.go:130] > [crio.metrics]
	I0920 18:32:52.667762  274391 command_runner.go:130] > # Globally enable or disable metrics support.
	I0920 18:32:52.667767  274391 command_runner.go:130] > enable_metrics = true
	I0920 18:32:52.667772  274391 command_runner.go:130] > # Specify enabled metrics collectors.
	I0920 18:32:52.667782  274391 command_runner.go:130] > # Per default all metrics are enabled.
	I0920 18:32:52.667796  274391 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0920 18:32:52.667810  274391 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0920 18:32:52.667822  274391 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0920 18:32:52.667832  274391 command_runner.go:130] > # metrics_collectors = [
	I0920 18:32:52.667838  274391 command_runner.go:130] > # 	"operations",
	I0920 18:32:52.667848  274391 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0920 18:32:52.667853  274391 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0920 18:32:52.667856  274391 command_runner.go:130] > # 	"operations_errors",
	I0920 18:32:52.667863  274391 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0920 18:32:52.667871  274391 command_runner.go:130] > # 	"image_pulls_by_name",
	I0920 18:32:52.667879  274391 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0920 18:32:52.667889  274391 command_runner.go:130] > # 	"image_pulls_failures",
	I0920 18:32:52.667895  274391 command_runner.go:130] > # 	"image_pulls_successes",
	I0920 18:32:52.667905  274391 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0920 18:32:52.667911  274391 command_runner.go:130] > # 	"image_layer_reuse",
	I0920 18:32:52.667922  274391 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0920 18:32:52.667932  274391 command_runner.go:130] > # 	"containers_oom_total",
	I0920 18:32:52.667939  274391 command_runner.go:130] > # 	"containers_oom",
	I0920 18:32:52.667943  274391 command_runner.go:130] > # 	"processes_defunct",
	I0920 18:32:52.667948  274391 command_runner.go:130] > # 	"operations_total",
	I0920 18:32:52.667957  274391 command_runner.go:130] > # 	"operations_latency_seconds",
	I0920 18:32:52.667967  274391 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0920 18:32:52.667976  274391 command_runner.go:130] > # 	"operations_errors_total",
	I0920 18:32:52.667984  274391 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0920 18:32:52.667995  274391 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0920 18:32:52.668006  274391 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0920 18:32:52.668016  274391 command_runner.go:130] > # 	"image_pulls_success_total",
	I0920 18:32:52.668023  274391 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0920 18:32:52.668028  274391 command_runner.go:130] > # 	"containers_oom_count_total",
	I0920 18:32:52.668034  274391 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0920 18:32:52.668038  274391 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0920 18:32:52.668041  274391 command_runner.go:130] > # ]
	I0920 18:32:52.668048  274391 command_runner.go:130] > # The port on which the metrics server will listen.
	I0920 18:32:52.668057  274391 command_runner.go:130] > # metrics_port = 9090
	I0920 18:32:52.668066  274391 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0920 18:32:52.668075  274391 command_runner.go:130] > # metrics_socket = ""
	I0920 18:32:52.668084  274391 command_runner.go:130] > # The certificate for the secure metrics server.
	I0920 18:32:52.668096  274391 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0920 18:32:52.668109  274391 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0920 18:32:52.668119  274391 command_runner.go:130] > # certificate on any modification event.
	I0920 18:32:52.668127  274391 command_runner.go:130] > # metrics_cert = ""
	I0920 18:32:52.668133  274391 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0920 18:32:52.668140  274391 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0920 18:32:52.668144  274391 command_runner.go:130] > # metrics_key = ""
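	With enable_metrics = true and the collector list left commented out, every collector is exported on the default port 9090. Restricting the set would look roughly like this, with names taken from the list above:

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	metrics_collectors = [
		"operations",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]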
	I0920 18:32:52.668150  274391 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0920 18:32:52.668158  274391 command_runner.go:130] > [crio.tracing]
	I0920 18:32:52.668163  274391 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0920 18:32:52.668169  274391 command_runner.go:130] > # enable_tracing = false
	I0920 18:32:52.668174  274391 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0920 18:32:52.668181  274391 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0920 18:32:52.668187  274391 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0920 18:32:52.668193  274391 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0920 18:32:52.668199  274391 command_runner.go:130] > # CRI-O NRI configuration.
	I0920 18:32:52.668207  274391 command_runner.go:130] > [crio.nri]
	I0920 18:32:52.668215  274391 command_runner.go:130] > # Globally enable or disable NRI.
	I0920 18:32:52.668224  274391 command_runner.go:130] > # enable_nri = false
	I0920 18:32:52.668232  274391 command_runner.go:130] > # NRI socket to listen on.
	I0920 18:32:52.668241  274391 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0920 18:32:52.668250  274391 command_runner.go:130] > # NRI plugin directory to use.
	I0920 18:32:52.668260  274391 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0920 18:32:52.668268  274391 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0920 18:32:52.668276  274391 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0920 18:32:52.668282  274391 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0920 18:32:52.668286  274391 command_runner.go:130] > # nri_disable_connections = false
	I0920 18:32:52.668291  274391 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0920 18:32:52.668298  274391 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0920 18:32:52.668303  274391 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0920 18:32:52.668308  274391 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
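	NRI stays disabled in this run. Turning it on needs only the first key below; the commented socket and plugin paths simply restate the defaults shown above:

	[crio.nri]
	enable_nri = true
	# nri_listen = "/var/run/nri/nri.sock"
	# nri_plugin_dir = "/opt/nri/plugins"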
	I0920 18:32:52.668315  274391 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0920 18:32:52.668321  274391 command_runner.go:130] > [crio.stats]
	I0920 18:32:52.668327  274391 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0920 18:32:52.668335  274391 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0920 18:32:52.668339  274391 command_runner.go:130] > # stats_collection_period = 0
	I0920 18:32:52.668362  274391 command_runner.go:130] ! time="2024-09-20 18:32:52.628065936Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0920 18:32:52.668376  274391 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0920 18:32:52.668457  274391 cni.go:84] Creating CNI manager for ""
	I0920 18:32:52.668468  274391 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0920 18:32:52.668477  274391 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:32:52.668499  274391 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-029872 NodeName:multinode-029872 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:32:52.668633  274391 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-029872"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
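The config dump above is the multi-document kubeadm YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube subsequently copies to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch for inspecting such a file is shown below; the file path is taken from the log, while the helper itself and the use of gopkg.in/yaml.v3 are illustrative assumptions and not part of minikube or the test harness.

    // Hypothetical helper: print the apiVersion/kind of each document in the
    // generated kubeadm config file. Not part of the test; for illustration only.
    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3" // assumed dependency, used only for this sketch
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log above
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }

Run against the file above, this would print the four apiVersion/kind pairs visible in the dump.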
	
	I0920 18:32:52.668697  274391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:32:52.678951  274391 command_runner.go:130] > kubeadm
	I0920 18:32:52.678982  274391 command_runner.go:130] > kubectl
	I0920 18:32:52.678988  274391 command_runner.go:130] > kubelet
	I0920 18:32:52.679011  274391 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:32:52.679069  274391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:32:52.689631  274391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0920 18:32:52.706305  274391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:32:52.723649  274391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0920 18:32:52.740958  274391 ssh_runner.go:195] Run: grep 192.168.39.208	control-plane.minikube.internal$ /etc/hosts
	I0920 18:32:52.745110  274391 command_runner.go:130] > 192.168.39.208	control-plane.minikube.internal
	I0920 18:32:52.745203  274391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:32:52.887598  274391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:32:52.902213  274391 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872 for IP: 192.168.39.208
	I0920 18:32:52.902245  274391 certs.go:194] generating shared ca certs ...
	I0920 18:32:52.902270  274391 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:32:52.902468  274391 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 18:32:52.902529  274391 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 18:32:52.902561  274391 certs.go:256] generating profile certs ...
	I0920 18:32:52.902682  274391 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/client.key
	I0920 18:32:52.902750  274391 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/apiserver.key.b4211b30
	I0920 18:32:52.902784  274391 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/proxy-client.key
	I0920 18:32:52.902796  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 18:32:52.902813  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 18:32:52.902831  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 18:32:52.902849  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 18:32:52.902866  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 18:32:52.902885  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 18:32:52.902902  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 18:32:52.902919  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 18:32:52.902971  274391 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 18:32:52.903000  274391 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 18:32:52.903009  274391 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:32:52.903032  274391 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:32:52.903055  274391 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:32:52.903075  274391 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 18:32:52.903112  274391 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:32:52.903139  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem -> /usr/share/ca-certificates/244849.pem
	I0920 18:32:52.903152  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /usr/share/ca-certificates/2448492.pem
	I0920 18:32:52.903167  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:32:52.903815  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:32:52.928339  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:32:52.953584  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:32:52.978649  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:32:53.002279  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 18:32:53.026816  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:32:53.050562  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:32:53.075369  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:32:53.099457  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 18:32:53.123057  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 18:32:53.146946  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:32:53.172837  274391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:32:53.190071  274391 ssh_runner.go:195] Run: openssl version
	I0920 18:32:53.196010  274391 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0920 18:32:53.196109  274391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 18:32:53.207586  274391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 18:32:53.212166  274391 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 18:32:53.212212  274391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 18:32:53.212260  274391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 18:32:53.217911  274391 command_runner.go:130] > 51391683
	I0920 18:32:53.217996  274391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 18:32:53.227663  274391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 18:32:53.238887  274391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 18:32:53.243872  274391 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 18:32:53.243920  274391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 18:32:53.243965  274391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 18:32:53.249728  274391 command_runner.go:130] > 3ec20f2e
	I0920 18:32:53.249829  274391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:32:53.259760  274391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:32:53.270759  274391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:32:53.275236  274391 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:32:53.275281  274391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:32:53.275336  274391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:32:53.281233  274391 command_runner.go:130] > b5213941
	I0920 18:32:53.281328  274391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
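The three openssl x509 -hash invocations above compute the subject hashes (51391683, 3ec20f2e, b5213941) that are then used as /etc/ssl/certs/<hash>.0 symlink names, so TLS clients on the node can find the CA certificates by hash. Below is a small Go sketch of that hash-and-link step, shelling out to the same openssl command the log shows; the helper name and the hard-coded certificate path are illustrative assumptions, and writing into /etc/ssl/certs requires root, as the sudo ln -fs in the log does.

    // Minimal sketch of the hash-and-link step from the log: compute the
    // OpenSSL subject hash of a CA certificate and expose it as
    // /etc/ssl/certs/<hash>.0.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkCACert(pemPath string) error {
        // Same command the test runs: openssl x509 -hash -noout -in <pem>
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // ln -fs equivalent: drop any stale link, then create a fresh one.
        _ = os.Remove(link)
        return os.Symlink(pemPath, link)
    }

    func main() {
        // Path taken from the log above; requires root to create the symlink.
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
    }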
	I0920 18:32:53.291514  274391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:32:53.296510  274391 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:32:53.296548  274391 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0920 18:32:53.296557  274391 command_runner.go:130] > Device: 253,1	Inode: 531240      Links: 1
	I0920 18:32:53.296567  274391 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0920 18:32:53.296577  274391 command_runner.go:130] > Access: 2024-09-20 18:26:03.723344368 +0000
	I0920 18:32:53.296586  274391 command_runner.go:130] > Modify: 2024-09-20 18:26:03.723344368 +0000
	I0920 18:32:53.296594  274391 command_runner.go:130] > Change: 2024-09-20 18:26:03.723344368 +0000
	I0920 18:32:53.296599  274391 command_runner.go:130] >  Birth: 2024-09-20 18:26:03.723344368 +0000
	I0920 18:32:53.296677  274391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:32:53.303122  274391 command_runner.go:130] > Certificate will not expire
	I0920 18:32:53.303197  274391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:32:53.309196  274391 command_runner.go:130] > Certificate will not expire
	I0920 18:32:53.309296  274391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:32:53.315161  274391 command_runner.go:130] > Certificate will not expire
	I0920 18:32:53.315260  274391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:32:53.321259  274391 command_runner.go:130] > Certificate will not expire
	I0920 18:32:53.321431  274391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:32:53.327389  274391 command_runner.go:130] > Certificate will not expire
	I0920 18:32:53.327562  274391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 18:32:53.333226  274391 command_runner.go:130] > Certificate will not expire
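Each openssl x509 -noout -checkend 86400 call above asks whether the certificate will still be valid 24 hours from now; "Certificate will not expire" means the check passed. The same check can be done natively with crypto/x509, as in the minimal sketch below; the helper and the hard-coded certificate path are assumptions for illustration, not code from the test.

    // Sketch of the expiry check performed with "openssl x509 -noout -checkend 86400",
    // done natively: report whether a certificate expires within the given duration.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func expiresWithin(certPath string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(certPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", certPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        // Path taken from the log above; 86400 seconds = 24 hours.
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        if soon {
            fmt.Println("Certificate will expire within 24h")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }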
	I0920 18:32:53.333378  274391 kubeadm.go:392] StartCluster: {Name:multinode-029872 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-029872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:
false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:32:53.333521  274391 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:32:53.333573  274391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:32:53.376062  274391 command_runner.go:130] > a3fb0bb7a4b9fc35b775e931ef2a1bbfbec34e679ca6b6d8fb7c78ac59be2289
	I0920 18:32:53.376094  274391 command_runner.go:130] > 3897ce884a96e9c37e2ae2f7bed67590af6954a8e687b9ba88a01d22e29a3246
	I0920 18:32:53.376103  274391 command_runner.go:130] > b35d8551f8a920c627d5b0364473c8d5bc743cf417dce9937c0769445051c907
	I0920 18:32:53.376119  274391 command_runner.go:130] > 2870a2ffc4d8472bf57d2be157ab5bee83a8ad11be43f8950b4964e68b24dc82
	I0920 18:32:53.376126  274391 command_runner.go:130] > 073a54c38674e8692d0308e45f3e05cc9179f3d5b85e00aa5d42bd126e0196ed
	I0920 18:32:53.376134  274391 command_runner.go:130] > 1804be53eac55fb70afccfdc2b30a9f18eb4952522eb900016095c20d5772daa
	I0920 18:32:53.376141  274391 command_runner.go:130] > c529d2d1a67659a8727534f484783b19b3e28f5fdea0dc634075c23cb408373a
	I0920 18:32:53.376150  274391 command_runner.go:130] > 705d3dd400a4ba3de60e31139e64e3ed80bcb3d04c7dac4ca65633badd43a643
	I0920 18:32:53.376178  274391 cri.go:89] found id: "a3fb0bb7a4b9fc35b775e931ef2a1bbfbec34e679ca6b6d8fb7c78ac59be2289"
	I0920 18:32:53.376189  274391 cri.go:89] found id: "3897ce884a96e9c37e2ae2f7bed67590af6954a8e687b9ba88a01d22e29a3246"
	I0920 18:32:53.376193  274391 cri.go:89] found id: "b35d8551f8a920c627d5b0364473c8d5bc743cf417dce9937c0769445051c907"
	I0920 18:32:53.376199  274391 cri.go:89] found id: "2870a2ffc4d8472bf57d2be157ab5bee83a8ad11be43f8950b4964e68b24dc82"
	I0920 18:32:53.376207  274391 cri.go:89] found id: "073a54c38674e8692d0308e45f3e05cc9179f3d5b85e00aa5d42bd126e0196ed"
	I0920 18:32:53.376214  274391 cri.go:89] found id: "1804be53eac55fb70afccfdc2b30a9f18eb4952522eb900016095c20d5772daa"
	I0920 18:32:53.376219  274391 cri.go:89] found id: "c529d2d1a67659a8727534f484783b19b3e28f5fdea0dc634075c23cb408373a"
	I0920 18:32:53.376224  274391 cri.go:89] found id: "705d3dd400a4ba3de60e31139e64e3ed80bcb3d04c7dac4ca65633badd43a643"
	I0920 18:32:53.376228  274391 cri.go:89] found id: ""
	I0920 18:32:53.376333  274391 ssh_runner.go:195] Run: sudo runc list -f json
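The eight container IDs collected above come from the crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system call earlier in this block. A rough Go sketch of that collection step is shown below, running the same command and splitting its output into one ID per line; the helper name is an illustrative assumption, and sudo plus crictl must be available on the node.

    // Sketch of the container-ID collection step: run the same crictl command the
    // test uses and return its non-empty output lines as container IDs.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func listKubeSystemContainers() ([]string, error) {
        // Same command as in the log:
        //   sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if id := strings.TrimSpace(line); id != "" {
                ids = append(ids, id)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listKubeSystemContainers()
        if err != nil {
            log.Fatal(err)
        }
        for _, id := range ids {
            fmt.Println("found id:", id)
        }
    }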
	
	
	==> CRI-O <==
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.392207938Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857277392181763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ba1f22c-b96f-4d70-a1f5-5828bf2191c8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.392875987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4108933b-0719-44b3-905a-009214217aa9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.392946772Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4108933b-0719-44b3-905a-009214217aa9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.393302630Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1325992f968925a7e0bd8ca36eb91f0be08eda48728e64ed2323ec77bf2f5b0,PodSandboxId:6ca8890cc99c3b272bf592253c07377789135a7a6de89eae7beb4f99f7572118,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726857212468347801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8vvbm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33451c66-27fc-4c97-aaa4-63e6156009b7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc1eeb0e023a3b826ce235ee9394ed39e834b1da0bffac4611d023fe0bd4d655,PodSandboxId:234ae40772716825c09687a0cf35c83a487bd28a84d425ee7262c589f92026f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726857178727556714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ff436-5e0e-43f2-a670-936b8d8bfe78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec4fb8debfebe359a87d116ad45d4dc578db044a0e0d812c62a956fe01ffbb1c,PodSandboxId:99c2381eaa0c1e2364ea6421d198461bcacff22a2c3d1b3f71e5a95feea1b6b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726857178517786604,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mjk2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d603c3d1-f8f2-46e1-ad57-abb6a019907b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90f07dfbad75d09ccff8e3c009a0061d0cf92a7d37b500882f4c886b9021a7e,PodSandboxId:6243608fd21a5a40bae600a7f50b44c7ba0ff09300ba771b16cf662acd47d9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726857175917382785,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ff071bfc2a4b8c94c23470636cf34671,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6ed28af14ff16bd8c9770970f355fc2cc818a8f475c5888fc4fdb56c129cb7,PodSandboxId:7790f09b328412d22d5f040b38f6d86121d85423c5db6f64d97f9fa1082a9ae4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726857175888286163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80da
0460aa0e8574bc96c14fafa16f14,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9317bb7bef08ea5c7f685012c221f64907dff56faa882dace075d4e2103bc2d,PodSandboxId:ca278c734be154c51b62847db506962be86313eb947f2a6a3622ef9cd52a0e48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726857175866848551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa568adb73211bd18d
c98541f4c15c8b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e48e9537db159e4367676ef9f93e6029c5303f946601f6dd01412cc1d4d35a0f,PodSandboxId:c8c9f8c7d4b6e06676c4e73b0b9bcd256ead338891b81cc85ff430e9adde0d70,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726857173916456917,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gmkqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e687c4b-71f4-4f25-8ef3-ce1b97ab79b4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d911e8c81f08e255bbb2b2f2e819d0947975821667db477cfaa24cdad47a8b,PodSandboxId:f56e0d8961714499c0bf769aad4040be2692b57207310a0f2caed28f84fb3dcb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726857173762077190,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5spcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2d81ad-92ae-45cb-bd30-829846b13661,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7aabb2691f20becaa4c5ca1e67cb43dae0f6e44f71b6f5b06ff5407e565c8f6,PodSandboxId:99c2381eaa0c1e2364ea6421d198461bcacff22a2c3d1b3f71e5a95feea1b6b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_CREATED,CreatedAt:1726857173824481004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mjk2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d603c3d1-f8f2-46e1-ad57-abb6a019907b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernete
s.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d973e006738759e76ecbf904a95ae2da1a8d48bc4493938303f4bb468347faa,PodSandboxId:91c2c951af446cc6dda00b7c57a130a5c59242b86af430fce1d3ffb89740c84a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726857173662917592,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-02987
2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a69aaf129c517f704df3c8fec3c8ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d18b934b362ca3e58d24504c3767c80803b43675fff12590d1d101aadcc6f72,PodSandboxId:c2899b18f212e4f8b6143d021264fb20d3ff5911135ecbc13b258c0088a3b1d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726856850078067054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8vvbm,
io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33451c66-27fc-4c97-aaa4-63e6156009b7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3897ce884a96e9c37e2ae2f7bed67590af6954a8e687b9ba88a01d22e29a3246,PodSandboxId:5330b65b85bb0874ab88295defaec3a4a81adb99453b8e972d310a90ec5010ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856791102386218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 8f9ff436-5e0e-43f2-a670-936b8d8bfe78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b35d8551f8a920c627d5b0364473c8d5bc743cf417dce9937c0769445051c907,PodSandboxId:4a27ce4bddd70fb49ec3a10e1ea41fd6a7bbbe161b7bb308f44d8d56ea71e7e6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726856779334068679,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gmkqk,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 9e687c4b-71f4-4f25-8ef3-ce1b97ab79b4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2870a2ffc4d8472bf57d2be157ab5bee83a8ad11be43f8950b4964e68b24dc82,PodSandboxId:cae525d4892a631162052f00172d6fd1fa1c0432a1fdb190dd825d319eac6b92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726856779204146465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5spcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2d81ad-92ae-45cb-bd3
0-829846b13661,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073a54c38674e8692d0308e45f3e05cc9179f3d5b85e00aa5d42bd126e0196ed,PodSandboxId:8439213e80207e430fcf31af3ff01c520bc776a8a996e5354212bc4135303289,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726856766988752247,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80da0460aa0e8574bc96c14fafa16f14,
},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705d3dd400a4ba3de60e31139e64e3ed80bcb3d04c7dac4ca65633badd43a643,PodSandboxId:6432f066bd8e9d8a86557cce08ccb49a49bc98e74cd4ad0c3de750ccecdf00f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726856766881282847,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a69aaf129c517f704df3c8fec3c8ea,},Annotations:map[string]string{io.kubernetes
.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1804be53eac55fb70afccfdc2b30a9f18eb4952522eb900016095c20d5772daa,PodSandboxId:1e755d432d60b11649a2bcee899436c37ccdde20c2dac5525d003db4782bbbc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726856766906045930,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa568adb73211bd18dc98541f4c15c8b,},Annotations:map[string]string{io.kubernetes.container.hash
: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c529d2d1a67659a8727534f484783b19b3e28f5fdea0dc634075c23cb408373a,PodSandboxId:dacb801d8f81c45556bf021618f4f58fa009a057aa261a132ea23089126b13b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726856766886909919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff071bfc2a4b8c94c23470636cf34671,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4108933b-0719-44b3-905a-009214217aa9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.439005098Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=343df3ff-eb28-4778-b410-6cbf6f1f1104 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.439100522Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=343df3ff-eb28-4778-b410-6cbf6f1f1104 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.440183641Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d105ca4-77b2-4b7a-8308-def567c91562 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.440762157Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857277440727433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d105ca4-77b2-4b7a-8308-def567c91562 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.441252321Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6684443f-af60-43cf-b887-5d0c366b7b76 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.441325226Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6684443f-af60-43cf-b887-5d0c366b7b76 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.441737797Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1325992f968925a7e0bd8ca36eb91f0be08eda48728e64ed2323ec77bf2f5b0,PodSandboxId:6ca8890cc99c3b272bf592253c07377789135a7a6de89eae7beb4f99f7572118,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726857212468347801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8vvbm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33451c66-27fc-4c97-aaa4-63e6156009b7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc1eeb0e023a3b826ce235ee9394ed39e834b1da0bffac4611d023fe0bd4d655,PodSandboxId:234ae40772716825c09687a0cf35c83a487bd28a84d425ee7262c589f92026f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726857178727556714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ff436-5e0e-43f2-a670-936b8d8bfe78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec4fb8debfebe359a87d116ad45d4dc578db044a0e0d812c62a956fe01ffbb1c,PodSandboxId:99c2381eaa0c1e2364ea6421d198461bcacff22a2c3d1b3f71e5a95feea1b6b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726857178517786604,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mjk2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d603c3d1-f8f2-46e1-ad57-abb6a019907b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90f07dfbad75d09ccff8e3c009a0061d0cf92a7d37b500882f4c886b9021a7e,PodSandboxId:6243608fd21a5a40bae600a7f50b44c7ba0ff09300ba771b16cf662acd47d9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726857175917382785,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ff071bfc2a4b8c94c23470636cf34671,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6ed28af14ff16bd8c9770970f355fc2cc818a8f475c5888fc4fdb56c129cb7,PodSandboxId:7790f09b328412d22d5f040b38f6d86121d85423c5db6f64d97f9fa1082a9ae4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726857175888286163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80da
0460aa0e8574bc96c14fafa16f14,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9317bb7bef08ea5c7f685012c221f64907dff56faa882dace075d4e2103bc2d,PodSandboxId:ca278c734be154c51b62847db506962be86313eb947f2a6a3622ef9cd52a0e48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726857175866848551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa568adb73211bd18d
c98541f4c15c8b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e48e9537db159e4367676ef9f93e6029c5303f946601f6dd01412cc1d4d35a0f,PodSandboxId:c8c9f8c7d4b6e06676c4e73b0b9bcd256ead338891b81cc85ff430e9adde0d70,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726857173916456917,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gmkqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e687c4b-71f4-4f25-8ef3-ce1b97ab79b4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d911e8c81f08e255bbb2b2f2e819d0947975821667db477cfaa24cdad47a8b,PodSandboxId:f56e0d8961714499c0bf769aad4040be2692b57207310a0f2caed28f84fb3dcb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726857173762077190,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5spcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2d81ad-92ae-45cb-bd30-829846b13661,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7aabb2691f20becaa4c5ca1e67cb43dae0f6e44f71b6f5b06ff5407e565c8f6,PodSandboxId:99c2381eaa0c1e2364ea6421d198461bcacff22a2c3d1b3f71e5a95feea1b6b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_CREATED,CreatedAt:1726857173824481004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mjk2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d603c3d1-f8f2-46e1-ad57-abb6a019907b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernete
s.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d973e006738759e76ecbf904a95ae2da1a8d48bc4493938303f4bb468347faa,PodSandboxId:91c2c951af446cc6dda00b7c57a130a5c59242b86af430fce1d3ffb89740c84a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726857173662917592,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-02987
2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a69aaf129c517f704df3c8fec3c8ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d18b934b362ca3e58d24504c3767c80803b43675fff12590d1d101aadcc6f72,PodSandboxId:c2899b18f212e4f8b6143d021264fb20d3ff5911135ecbc13b258c0088a3b1d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726856850078067054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8vvbm,
io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33451c66-27fc-4c97-aaa4-63e6156009b7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3897ce884a96e9c37e2ae2f7bed67590af6954a8e687b9ba88a01d22e29a3246,PodSandboxId:5330b65b85bb0874ab88295defaec3a4a81adb99453b8e972d310a90ec5010ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856791102386218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 8f9ff436-5e0e-43f2-a670-936b8d8bfe78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b35d8551f8a920c627d5b0364473c8d5bc743cf417dce9937c0769445051c907,PodSandboxId:4a27ce4bddd70fb49ec3a10e1ea41fd6a7bbbe161b7bb308f44d8d56ea71e7e6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726856779334068679,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gmkqk,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 9e687c4b-71f4-4f25-8ef3-ce1b97ab79b4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2870a2ffc4d8472bf57d2be157ab5bee83a8ad11be43f8950b4964e68b24dc82,PodSandboxId:cae525d4892a631162052f00172d6fd1fa1c0432a1fdb190dd825d319eac6b92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726856779204146465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5spcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2d81ad-92ae-45cb-bd3
0-829846b13661,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073a54c38674e8692d0308e45f3e05cc9179f3d5b85e00aa5d42bd126e0196ed,PodSandboxId:8439213e80207e430fcf31af3ff01c520bc776a8a996e5354212bc4135303289,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726856766988752247,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80da0460aa0e8574bc96c14fafa16f14,
},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705d3dd400a4ba3de60e31139e64e3ed80bcb3d04c7dac4ca65633badd43a643,PodSandboxId:6432f066bd8e9d8a86557cce08ccb49a49bc98e74cd4ad0c3de750ccecdf00f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726856766881282847,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a69aaf129c517f704df3c8fec3c8ea,},Annotations:map[string]string{io.kubernetes
.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1804be53eac55fb70afccfdc2b30a9f18eb4952522eb900016095c20d5772daa,PodSandboxId:1e755d432d60b11649a2bcee899436c37ccdde20c2dac5525d003db4782bbbc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726856766906045930,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa568adb73211bd18dc98541f4c15c8b,},Annotations:map[string]string{io.kubernetes.container.hash
: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c529d2d1a67659a8727534f484783b19b3e28f5fdea0dc634075c23cb408373a,PodSandboxId:dacb801d8f81c45556bf021618f4f58fa009a057aa261a132ea23089126b13b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726856766886909919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff071bfc2a4b8c94c23470636cf34671,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6684443f-af60-43cf-b887-5d0c366b7b76 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.486103632Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ddbc3d8a-6387-4434-a9dd-70abd9bc806f name=/runtime.v1.RuntimeService/Version
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.486175189Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ddbc3d8a-6387-4434-a9dd-70abd9bc806f name=/runtime.v1.RuntimeService/Version
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.487306225Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1062997e-cd4a-435e-be5e-b562d2078fc4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.487884229Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857277487859340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1062997e-cd4a-435e-be5e-b562d2078fc4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.488377223Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c584f85-a980-4535-9d14-5c26adb3ee21 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.488439022Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c584f85-a980-4535-9d14-5c26adb3ee21 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.488829173Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1325992f968925a7e0bd8ca36eb91f0be08eda48728e64ed2323ec77bf2f5b0,PodSandboxId:6ca8890cc99c3b272bf592253c07377789135a7a6de89eae7beb4f99f7572118,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726857212468347801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8vvbm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33451c66-27fc-4c97-aaa4-63e6156009b7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc1eeb0e023a3b826ce235ee9394ed39e834b1da0bffac4611d023fe0bd4d655,PodSandboxId:234ae40772716825c09687a0cf35c83a487bd28a84d425ee7262c589f92026f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726857178727556714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ff436-5e0e-43f2-a670-936b8d8bfe78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec4fb8debfebe359a87d116ad45d4dc578db044a0e0d812c62a956fe01ffbb1c,PodSandboxId:99c2381eaa0c1e2364ea6421d198461bcacff22a2c3d1b3f71e5a95feea1b6b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726857178517786604,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mjk2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d603c3d1-f8f2-46e1-ad57-abb6a019907b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90f07dfbad75d09ccff8e3c009a0061d0cf92a7d37b500882f4c886b9021a7e,PodSandboxId:6243608fd21a5a40bae600a7f50b44c7ba0ff09300ba771b16cf662acd47d9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726857175917382785,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ff071bfc2a4b8c94c23470636cf34671,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6ed28af14ff16bd8c9770970f355fc2cc818a8f475c5888fc4fdb56c129cb7,PodSandboxId:7790f09b328412d22d5f040b38f6d86121d85423c5db6f64d97f9fa1082a9ae4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726857175888286163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80da
0460aa0e8574bc96c14fafa16f14,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9317bb7bef08ea5c7f685012c221f64907dff56faa882dace075d4e2103bc2d,PodSandboxId:ca278c734be154c51b62847db506962be86313eb947f2a6a3622ef9cd52a0e48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726857175866848551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa568adb73211bd18d
c98541f4c15c8b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e48e9537db159e4367676ef9f93e6029c5303f946601f6dd01412cc1d4d35a0f,PodSandboxId:c8c9f8c7d4b6e06676c4e73b0b9bcd256ead338891b81cc85ff430e9adde0d70,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726857173916456917,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gmkqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e687c4b-71f4-4f25-8ef3-ce1b97ab79b4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d911e8c81f08e255bbb2b2f2e819d0947975821667db477cfaa24cdad47a8b,PodSandboxId:f56e0d8961714499c0bf769aad4040be2692b57207310a0f2caed28f84fb3dcb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726857173762077190,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5spcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2d81ad-92ae-45cb-bd30-829846b13661,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7aabb2691f20becaa4c5ca1e67cb43dae0f6e44f71b6f5b06ff5407e565c8f6,PodSandboxId:99c2381eaa0c1e2364ea6421d198461bcacff22a2c3d1b3f71e5a95feea1b6b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_CREATED,CreatedAt:1726857173824481004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mjk2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d603c3d1-f8f2-46e1-ad57-abb6a019907b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernete
s.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d973e006738759e76ecbf904a95ae2da1a8d48bc4493938303f4bb468347faa,PodSandboxId:91c2c951af446cc6dda00b7c57a130a5c59242b86af430fce1d3ffb89740c84a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726857173662917592,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-02987
2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a69aaf129c517f704df3c8fec3c8ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d18b934b362ca3e58d24504c3767c80803b43675fff12590d1d101aadcc6f72,PodSandboxId:c2899b18f212e4f8b6143d021264fb20d3ff5911135ecbc13b258c0088a3b1d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726856850078067054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8vvbm,
io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33451c66-27fc-4c97-aaa4-63e6156009b7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3897ce884a96e9c37e2ae2f7bed67590af6954a8e687b9ba88a01d22e29a3246,PodSandboxId:5330b65b85bb0874ab88295defaec3a4a81adb99453b8e972d310a90ec5010ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856791102386218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 8f9ff436-5e0e-43f2-a670-936b8d8bfe78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b35d8551f8a920c627d5b0364473c8d5bc743cf417dce9937c0769445051c907,PodSandboxId:4a27ce4bddd70fb49ec3a10e1ea41fd6a7bbbe161b7bb308f44d8d56ea71e7e6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726856779334068679,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gmkqk,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 9e687c4b-71f4-4f25-8ef3-ce1b97ab79b4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2870a2ffc4d8472bf57d2be157ab5bee83a8ad11be43f8950b4964e68b24dc82,PodSandboxId:cae525d4892a631162052f00172d6fd1fa1c0432a1fdb190dd825d319eac6b92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726856779204146465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5spcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2d81ad-92ae-45cb-bd3
0-829846b13661,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073a54c38674e8692d0308e45f3e05cc9179f3d5b85e00aa5d42bd126e0196ed,PodSandboxId:8439213e80207e430fcf31af3ff01c520bc776a8a996e5354212bc4135303289,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726856766988752247,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80da0460aa0e8574bc96c14fafa16f14,
},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705d3dd400a4ba3de60e31139e64e3ed80bcb3d04c7dac4ca65633badd43a643,PodSandboxId:6432f066bd8e9d8a86557cce08ccb49a49bc98e74cd4ad0c3de750ccecdf00f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726856766881282847,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a69aaf129c517f704df3c8fec3c8ea,},Annotations:map[string]string{io.kubernetes
.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1804be53eac55fb70afccfdc2b30a9f18eb4952522eb900016095c20d5772daa,PodSandboxId:1e755d432d60b11649a2bcee899436c37ccdde20c2dac5525d003db4782bbbc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726856766906045930,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa568adb73211bd18dc98541f4c15c8b,},Annotations:map[string]string{io.kubernetes.container.hash
: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c529d2d1a67659a8727534f484783b19b3e28f5fdea0dc634075c23cb408373a,PodSandboxId:dacb801d8f81c45556bf021618f4f58fa009a057aa261a132ea23089126b13b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726856766886909919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff071bfc2a4b8c94c23470636cf34671,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c584f85-a980-4535-9d14-5c26adb3ee21 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.529309539Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=48e925b4-2112-4faa-8c05-44cd0442b73f name=/runtime.v1.RuntimeService/Version
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.529386261Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=48e925b4-2112-4faa-8c05-44cd0442b73f name=/runtime.v1.RuntimeService/Version
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.530297947Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c0ee36c7-fe89-4726-969a-ba04ae0f2636 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.530858313Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857277530834077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0ee36c7-fe89-4726-969a-ba04ae0f2636 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.531342609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac5614e5-6e25-4dd8-b28d-9ef64b12a6f5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.531414511Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac5614e5-6e25-4dd8-b28d-9ef64b12a6f5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:34:37 multinode-029872 crio[2704]: time="2024-09-20 18:34:37.531888202Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1325992f968925a7e0bd8ca36eb91f0be08eda48728e64ed2323ec77bf2f5b0,PodSandboxId:6ca8890cc99c3b272bf592253c07377789135a7a6de89eae7beb4f99f7572118,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726857212468347801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8vvbm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33451c66-27fc-4c97-aaa4-63e6156009b7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc1eeb0e023a3b826ce235ee9394ed39e834b1da0bffac4611d023fe0bd4d655,PodSandboxId:234ae40772716825c09687a0cf35c83a487bd28a84d425ee7262c589f92026f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726857178727556714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ff436-5e0e-43f2-a670-936b8d8bfe78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec4fb8debfebe359a87d116ad45d4dc578db044a0e0d812c62a956fe01ffbb1c,PodSandboxId:99c2381eaa0c1e2364ea6421d198461bcacff22a2c3d1b3f71e5a95feea1b6b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726857178517786604,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mjk2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d603c3d1-f8f2-46e1-ad57-abb6a019907b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90f07dfbad75d09ccff8e3c009a0061d0cf92a7d37b500882f4c886b9021a7e,PodSandboxId:6243608fd21a5a40bae600a7f50b44c7ba0ff09300ba771b16cf662acd47d9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726857175917382785,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ff071bfc2a4b8c94c23470636cf34671,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6ed28af14ff16bd8c9770970f355fc2cc818a8f475c5888fc4fdb56c129cb7,PodSandboxId:7790f09b328412d22d5f040b38f6d86121d85423c5db6f64d97f9fa1082a9ae4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726857175888286163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80da
0460aa0e8574bc96c14fafa16f14,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9317bb7bef08ea5c7f685012c221f64907dff56faa882dace075d4e2103bc2d,PodSandboxId:ca278c734be154c51b62847db506962be86313eb947f2a6a3622ef9cd52a0e48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726857175866848551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa568adb73211bd18d
c98541f4c15c8b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e48e9537db159e4367676ef9f93e6029c5303f946601f6dd01412cc1d4d35a0f,PodSandboxId:c8c9f8c7d4b6e06676c4e73b0b9bcd256ead338891b81cc85ff430e9adde0d70,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726857173916456917,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gmkqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e687c4b-71f4-4f25-8ef3-ce1b97ab79b4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d911e8c81f08e255bbb2b2f2e819d0947975821667db477cfaa24cdad47a8b,PodSandboxId:f56e0d8961714499c0bf769aad4040be2692b57207310a0f2caed28f84fb3dcb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726857173762077190,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5spcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2d81ad-92ae-45cb-bd30-829846b13661,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7aabb2691f20becaa4c5ca1e67cb43dae0f6e44f71b6f5b06ff5407e565c8f6,PodSandboxId:99c2381eaa0c1e2364ea6421d198461bcacff22a2c3d1b3f71e5a95feea1b6b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_CREATED,CreatedAt:1726857173824481004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mjk2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d603c3d1-f8f2-46e1-ad57-abb6a019907b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernete
s.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d973e006738759e76ecbf904a95ae2da1a8d48bc4493938303f4bb468347faa,PodSandboxId:91c2c951af446cc6dda00b7c57a130a5c59242b86af430fce1d3ffb89740c84a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726857173662917592,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-02987
2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a69aaf129c517f704df3c8fec3c8ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d18b934b362ca3e58d24504c3767c80803b43675fff12590d1d101aadcc6f72,PodSandboxId:c2899b18f212e4f8b6143d021264fb20d3ff5911135ecbc13b258c0088a3b1d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726856850078067054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8vvbm,
io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33451c66-27fc-4c97-aaa4-63e6156009b7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3897ce884a96e9c37e2ae2f7bed67590af6954a8e687b9ba88a01d22e29a3246,PodSandboxId:5330b65b85bb0874ab88295defaec3a4a81adb99453b8e972d310a90ec5010ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856791102386218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 8f9ff436-5e0e-43f2-a670-936b8d8bfe78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b35d8551f8a920c627d5b0364473c8d5bc743cf417dce9937c0769445051c907,PodSandboxId:4a27ce4bddd70fb49ec3a10e1ea41fd6a7bbbe161b7bb308f44d8d56ea71e7e6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726856779334068679,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gmkqk,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 9e687c4b-71f4-4f25-8ef3-ce1b97ab79b4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2870a2ffc4d8472bf57d2be157ab5bee83a8ad11be43f8950b4964e68b24dc82,PodSandboxId:cae525d4892a631162052f00172d6fd1fa1c0432a1fdb190dd825d319eac6b92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726856779204146465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5spcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2d81ad-92ae-45cb-bd3
0-829846b13661,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073a54c38674e8692d0308e45f3e05cc9179f3d5b85e00aa5d42bd126e0196ed,PodSandboxId:8439213e80207e430fcf31af3ff01c520bc776a8a996e5354212bc4135303289,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726856766988752247,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80da0460aa0e8574bc96c14fafa16f14,
},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705d3dd400a4ba3de60e31139e64e3ed80bcb3d04c7dac4ca65633badd43a643,PodSandboxId:6432f066bd8e9d8a86557cce08ccb49a49bc98e74cd4ad0c3de750ccecdf00f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726856766881282847,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a69aaf129c517f704df3c8fec3c8ea,},Annotations:map[string]string{io.kubernetes
.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1804be53eac55fb70afccfdc2b30a9f18eb4952522eb900016095c20d5772daa,PodSandboxId:1e755d432d60b11649a2bcee899436c37ccdde20c2dac5525d003db4782bbbc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726856766906045930,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa568adb73211bd18dc98541f4c15c8b,},Annotations:map[string]string{io.kubernetes.container.hash
: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c529d2d1a67659a8727534f484783b19b3e28f5fdea0dc634075c23cb408373a,PodSandboxId:dacb801d8f81c45556bf021618f4f58fa009a057aa261a132ea23089126b13b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726856766886909919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff071bfc2a4b8c94c23470636cf34671,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac5614e5-6e25-4dd8-b28d-9ef64b12a6f5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a1325992f9689       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   6ca8890cc99c3       busybox-7dff88458-8vvbm
	cc1eeb0e023a3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   234ae40772716       storage-provisioner
	ec4fb8debfebe       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   2                   99c2381eaa0c1       coredns-7c65d6cfc9-mjk2z
	f90f07dfbad75       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   6243608fd21a5       kube-controller-manager-multinode-029872
	4e6ed28af14ff       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   7790f09b32841       kube-scheduler-multinode-029872
	d9317bb7bef08       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   ca278c734be15       kube-apiserver-multinode-029872
	e48e9537db159       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   c8c9f8c7d4b6e       kindnet-gmkqk
	b7aabb2691f20       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Created             coredns                   1                   99c2381eaa0c1       coredns-7c65d6cfc9-mjk2z
	63d911e8c81f0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   f56e0d8961714       kube-proxy-5spcx
	1d973e0067387       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   91c2c951af446       etcd-multinode-029872
	0d18b934b362c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   c2899b18f212e       busybox-7dff88458-8vvbm
	3897ce884a96e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   5330b65b85bb0       storage-provisioner
	b35d8551f8a92       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   4a27ce4bddd70       kindnet-gmkqk
	2870a2ffc4d84       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   cae525d4892a6       kube-proxy-5spcx
	073a54c38674e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   8439213e80207       kube-scheduler-multinode-029872
	1804be53eac55       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   1e755d432d60b       kube-apiserver-multinode-029872
	c529d2d1a6765       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   dacb801d8f81c       kube-controller-manager-multinode-029872
	705d3dd400a4b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   6432f066bd8e9       etcd-multinode-029872
	
	
	==> coredns [b7aabb2691f20becaa4c5ca1e67cb43dae0f6e44f71b6f5b06ff5407e565c8f6] <==
	
	
	==> coredns [ec4fb8debfebe359a87d116ad45d4dc578db044a0e0d812c62a956fe01ffbb1c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52393 - 61376 "HINFO IN 7918400279444024197.6702250931947430502. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014554351s
	
	
	==> describe nodes <==
	Name:               multinode-029872
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-029872
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=multinode-029872
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_26_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:26:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-029872
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:34:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:32:58 +0000   Fri, 20 Sep 2024 18:26:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:32:58 +0000   Fri, 20 Sep 2024 18:26:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:32:58 +0000   Fri, 20 Sep 2024 18:26:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:32:58 +0000   Fri, 20 Sep 2024 18:26:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.208
	  Hostname:    multinode-029872
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 43d7473b27e74347bacea3bc028d8640
	  System UUID:                43d7473b-27e7-4347-bace-a3bc028d8640
	  Boot ID:                    a5fe9348-cdf2-40c9-ae57-5402902cd3cd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8vvbm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 coredns-7c65d6cfc9-mjk2z                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m20s
	  kube-system                 etcd-multinode-029872                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m26s
	  kube-system                 kindnet-gmkqk                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m20s
	  kube-system                 kube-apiserver-multinode-029872             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-controller-manager-multinode-029872    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-proxy-5spcx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-scheduler-multinode-029872             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m18s                  kube-proxy       
	  Normal  Starting                 98s                    kube-proxy       
	  Normal  Starting                 8m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m31s (x8 over 8m31s)  kubelet          Node multinode-029872 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m31s (x8 over 8m31s)  kubelet          Node multinode-029872 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m31s (x7 over 8m31s)  kubelet          Node multinode-029872 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m25s                  kubelet          Node multinode-029872 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m25s                  kubelet          Node multinode-029872 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m25s                  kubelet          Node multinode-029872 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m25s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m21s                  node-controller  Node multinode-029872 event: Registered Node multinode-029872 in Controller
	  Normal  NodeReady                8m7s                   kubelet          Node multinode-029872 status is now: NodeReady
	  Normal  Starting                 102s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s (x8 over 102s)    kubelet          Node multinode-029872 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x8 over 102s)    kubelet          Node multinode-029872 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x7 over 102s)    kubelet          Node multinode-029872 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  102s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           96s                    node-controller  Node multinode-029872 event: Registered Node multinode-029872 in Controller
	
	
	Name:               multinode-029872-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-029872-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=multinode-029872
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_33_36_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:33:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-029872-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:34:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:34:07 +0000   Fri, 20 Sep 2024 18:33:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:34:07 +0000   Fri, 20 Sep 2024 18:33:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:34:07 +0000   Fri, 20 Sep 2024 18:33:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:34:07 +0000   Fri, 20 Sep 2024 18:33:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.168
	  Hostname:    multinode-029872-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac160e0a0adc42e8936ae664dffb726f
	  System UUID:                ac160e0a-0adc-42e8-936a-e664dffb726f
	  Boot ID:                    b10aa6c0-f090-4f36-9b4a-bc9b2d3ead0f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cz7gz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-8spmr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m34s
	  kube-system                 kube-proxy-kbppv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m29s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeAllocatableEnforced  7m35s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m34s (x2 over 7m35s)  kubelet     Node multinode-029872-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m34s (x2 over 7m35s)  kubelet     Node multinode-029872-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m34s (x2 over 7m35s)  kubelet     Node multinode-029872-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m13s                  kubelet     Node multinode-029872-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  61s (x2 over 61s)      kubelet     Node multinode-029872-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 61s)      kubelet     Node multinode-029872-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 61s)      kubelet     Node multinode-029872-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  61s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-029872-m02 status is now: NodeReady
	
	
	Name:               multinode-029872-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-029872-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=multinode-029872
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_34_16_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:34:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-029872-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:34:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:34:34 +0000   Fri, 20 Sep 2024 18:34:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:34:34 +0000   Fri, 20 Sep 2024 18:34:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:34:34 +0000   Fri, 20 Sep 2024 18:34:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:34:34 +0000   Fri, 20 Sep 2024 18:34:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.173
	  Hostname:    multinode-029872-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1e1eaf0d78f046669cb6f1cfb296718c
	  System UUID:                1e1eaf0d-78f0-4666-9cb6-f1cfb296718c
	  Boot ID:                    4f9823ad-6183-409f-bd80-a17208e933ce
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-wtscj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m38s
	  kube-system                 kube-proxy-mjjpg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m33s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m42s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m39s (x2 over 6m39s)  kubelet     Node multinode-029872-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m39s (x2 over 6m39s)  kubelet     Node multinode-029872-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m39s (x2 over 6m39s)  kubelet     Node multinode-029872-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m39s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m18s                  kubelet     Node multinode-029872-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m47s (x2 over 5m47s)  kubelet     Node multinode-029872-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m47s (x2 over 5m47s)  kubelet     Node multinode-029872-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m47s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m47s (x2 over 5m47s)  kubelet     Node multinode-029872-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m47s                  kubelet     Starting kubelet.
	  Normal  NodeReady                5m28s                  kubelet     Node multinode-029872-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet     Node multinode-029872-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet     Node multinode-029872-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet     Node multinode-029872-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-029872-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.065459] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.170159] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.143132] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.268833] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[Sep20 18:26] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +3.925117] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.061639] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.999075] systemd-fstab-generator[1216]: Ignoring "noauto" option for root device
	[  +0.086942] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.611215] systemd-fstab-generator[1325]: Ignoring "noauto" option for root device
	[  +0.093715] kauditd_printk_skb: 21 callbacks suppressed
	[ +13.435528] kauditd_printk_skb: 60 callbacks suppressed
	[Sep20 18:27] kauditd_printk_skb: 14 callbacks suppressed
	[Sep20 18:32] systemd-fstab-generator[2627]: Ignoring "noauto" option for root device
	[  +0.151807] systemd-fstab-generator[2639]: Ignoring "noauto" option for root device
	[  +0.195751] systemd-fstab-generator[2653]: Ignoring "noauto" option for root device
	[  +0.158670] systemd-fstab-generator[2665]: Ignoring "noauto" option for root device
	[  +0.277223] systemd-fstab-generator[2694]: Ignoring "noauto" option for root device
	[  +5.206252] systemd-fstab-generator[2788]: Ignoring "noauto" option for root device
	[  +0.082771] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.116753] systemd-fstab-generator[3171]: Ignoring "noauto" option for root device
	[  +3.602813] kauditd_printk_skb: 106 callbacks suppressed
	[Sep20 18:33] systemd-fstab-generator[3792]: Ignoring "noauto" option for root device
	[  +0.093622] kauditd_printk_skb: 11 callbacks suppressed
	[ +20.475894] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [1d973e006738759e76ecbf904a95ae2da1a8d48bc4493938303f4bb468347faa] <==
	{"level":"info","ts":"2024-09-20T18:32:54.020824Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fb8a78b66dce1ac7","local-member-id":"7fe6bf77aaafe0f6","added-peer-id":"7fe6bf77aaafe0f6","added-peer-peer-urls":["https://192.168.39.208:2380"]}
	{"level":"info","ts":"2024-09-20T18:32:54.020936Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fb8a78b66dce1ac7","local-member-id":"7fe6bf77aaafe0f6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:32:54.020963Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:32:54.023781Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:32:54.029741Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T18:32:54.030248Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7fe6bf77aaafe0f6","initial-advertise-peer-urls":["https://192.168.39.208:2380"],"listen-peer-urls":["https://192.168.39.208:2380"],"advertise-client-urls":["https://192.168.39.208:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.208:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T18:32:54.030648Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T18:32:54.030790Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.208:2380"}
	{"level":"info","ts":"2024-09-20T18:32:54.030803Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.208:2380"}
	{"level":"info","ts":"2024-09-20T18:32:55.798662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-20T18:32:55.798764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-20T18:32:55.798821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 received MsgPreVoteResp from 7fe6bf77aaafe0f6 at term 2"}
	{"level":"info","ts":"2024-09-20T18:32:55.798861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T18:32:55.798889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 received MsgVoteResp from 7fe6bf77aaafe0f6 at term 3"}
	{"level":"info","ts":"2024-09-20T18:32:55.798924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 became leader at term 3"}
	{"level":"info","ts":"2024-09-20T18:32:55.798951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7fe6bf77aaafe0f6 elected leader 7fe6bf77aaafe0f6 at term 3"}
	{"level":"info","ts":"2024-09-20T18:32:55.804087Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7fe6bf77aaafe0f6","local-member-attributes":"{Name:multinode-029872 ClientURLs:[https://192.168.39.208:2379]}","request-path":"/0/members/7fe6bf77aaafe0f6/attributes","cluster-id":"fb8a78b66dce1ac7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:32:55.804390Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:32:55.805491Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:32:55.810631Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.208:2379"}
	{"level":"info","ts":"2024-09-20T18:32:55.810953Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:32:55.814980Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:32:55.806742Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:32:55.818508Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:32:55.821686Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [705d3dd400a4ba3de60e31139e64e3ed80bcb3d04c7dac4ca65633badd43a643] <==
	{"level":"info","ts":"2024-09-20T18:26:08.027417Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:26:08.029402Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:26:08.029570Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:26:08.029616Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:26:08.030148Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.208:2379"}
	{"level":"info","ts":"2024-09-20T18:26:08.030164Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:26:08.031028Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T18:26:08.030266Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fb8a78b66dce1ac7","local-member-id":"7fe6bf77aaafe0f6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:26:08.033687Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:26:08.033729Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	2024/09/20 18:26:11 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-20T18:27:02.991889Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.568547ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16210304509122378693 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-029872-m02.17f70721892cde7f\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-029872-m02.17f70721892cde7f\" value_size:642 lease:6986932472267602220 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-20T18:27:02.991999Z","caller":"traceutil/trace.go:171","msg":"trace[249892085] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"236.006584ms","start":"2024-09-20T18:27:02.755976Z","end":"2024-09-20T18:27:02.991982Z","steps":["trace[249892085] 'process raft request'  (duration: 74.747174ms)","trace[249892085] 'compare'  (duration: 160.473952ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T18:27:58.849785Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.23444ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16210304509122379201 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-029872-m03.17f7072e8bd98e9c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-029872-m03.17f7072e8bd98e9c\" value_size:646 lease:6986932472267603009 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-20T18:27:58.850194Z","caller":"traceutil/trace.go:171","msg":"trace[420030120] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"210.447159ms","start":"2024-09-20T18:27:58.639703Z","end":"2024-09-20T18:27:58.850151Z","steps":["trace[420030120] 'process raft request'  (duration: 79.729945ms)","trace[420030120] 'compare'  (duration: 130.124327ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T18:31:15.515354Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-20T18:31:15.515499Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-029872","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.208:2380"],"advertise-client-urls":["https://192.168.39.208:2379"]}
	{"level":"warn","ts":"2024-09-20T18:31:15.515688Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:31:15.545751Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:31:15.604626Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.208:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:31:15.604898Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.208:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T18:31:15.605035Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7fe6bf77aaafe0f6","current-leader-member-id":"7fe6bf77aaafe0f6"}
	{"level":"info","ts":"2024-09-20T18:31:15.608384Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.208:2380"}
	{"level":"info","ts":"2024-09-20T18:31:15.608631Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.208:2380"}
	{"level":"info","ts":"2024-09-20T18:31:15.608666Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-029872","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.208:2380"],"advertise-client-urls":["https://192.168.39.208:2379"]}
	
	
	==> kernel <==
	 18:34:38 up 9 min,  0 users,  load average: 0.21, 0.20, 0.10
	Linux multinode-029872 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b35d8551f8a920c627d5b0364473c8d5bc743cf417dce9937c0769445051c907] <==
	I0920 18:30:30.293380       1 main.go:322] Node multinode-029872-m03 has CIDR [10.244.4.0/24] 
	I0920 18:30:40.288729       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 18:30:40.288854       1 main.go:322] Node multinode-029872-m02 has CIDR [10.244.1.0/24] 
	I0920 18:30:40.289000       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0920 18:30:40.289027       1 main.go:322] Node multinode-029872-m03 has CIDR [10.244.4.0/24] 
	I0920 18:30:40.289098       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0920 18:30:40.289118       1 main.go:299] handling current node
	I0920 18:30:50.284637       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0920 18:30:50.284686       1 main.go:299] handling current node
	I0920 18:30:50.284710       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 18:30:50.284715       1 main.go:322] Node multinode-029872-m02 has CIDR [10.244.1.0/24] 
	I0920 18:30:50.284839       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0920 18:30:50.284861       1 main.go:322] Node multinode-029872-m03 has CIDR [10.244.4.0/24] 
	I0920 18:31:00.292654       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0920 18:31:00.292696       1 main.go:299] handling current node
	I0920 18:31:00.292712       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 18:31:00.292718       1 main.go:322] Node multinode-029872-m02 has CIDR [10.244.1.0/24] 
	I0920 18:31:00.292861       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0920 18:31:00.292880       1 main.go:322] Node multinode-029872-m03 has CIDR [10.244.4.0/24] 
	I0920 18:31:10.288691       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 18:31:10.288835       1 main.go:322] Node multinode-029872-m02 has CIDR [10.244.1.0/24] 
	I0920 18:31:10.288988       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0920 18:31:10.289029       1 main.go:322] Node multinode-029872-m03 has CIDR [10.244.4.0/24] 
	I0920 18:31:10.289102       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0920 18:31:10.289122       1 main.go:299] handling current node
	
	
	==> kindnet [e48e9537db159e4367676ef9f93e6029c5303f946601f6dd01412cc1d4d35a0f] <==
	I0920 18:33:59.103496       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0920 18:33:59.103677       1 main.go:299] handling current node
	I0920 18:33:59.103710       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 18:33:59.103730       1 main.go:322] Node multinode-029872-m02 has CIDR [10.244.1.0/24] 
	I0920 18:33:59.103874       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0920 18:33:59.103982       1 main.go:322] Node multinode-029872-m03 has CIDR [10.244.4.0/24] 
	I0920 18:34:09.102975       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0920 18:34:09.103191       1 main.go:299] handling current node
	I0920 18:34:09.103282       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 18:34:09.103331       1 main.go:322] Node multinode-029872-m02 has CIDR [10.244.1.0/24] 
	I0920 18:34:09.103600       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0920 18:34:09.103666       1 main.go:322] Node multinode-029872-m03 has CIDR [10.244.4.0/24] 
	I0920 18:34:19.103871       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0920 18:34:19.103990       1 main.go:299] handling current node
	I0920 18:34:19.104027       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 18:34:19.104035       1 main.go:322] Node multinode-029872-m02 has CIDR [10.244.1.0/24] 
	I0920 18:34:19.104257       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0920 18:34:19.104287       1 main.go:322] Node multinode-029872-m03 has CIDR [10.244.2.0/24] 
	I0920 18:34:19.104951       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.173 Flags: [] Table: 0} 
	I0920 18:34:29.104710       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0920 18:34:29.104864       1 main.go:299] handling current node
	I0920 18:34:29.104889       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 18:34:29.104896       1 main.go:322] Node multinode-029872-m02 has CIDR [10.244.1.0/24] 
	I0920 18:34:29.105060       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0920 18:34:29.105068       1 main.go:322] Node multinode-029872-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [1804be53eac55fb70afccfdc2b30a9f18eb4952522eb900016095c20d5772daa] <==
	W0920 18:31:15.537413       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.537444       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.538003       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.538074       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.538112       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.538148       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.538199       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.538241       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.538275       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.538304       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539069       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539102       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539131       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539158       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539191       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539230       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539259       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539288       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539323       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539939       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539973       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.540000       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.540027       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.540056       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.540296       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d9317bb7bef08ea5c7f685012c221f64907dff56faa882dace075d4e2103bc2d] <==
	I0920 18:32:58.144442       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 18:32:58.144483       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 18:32:58.149779       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 18:32:58.150593       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 18:32:58.150732       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 18:32:58.151347       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 18:32:58.151401       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 18:32:58.157025       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0920 18:32:58.178603       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 18:32:58.180570       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 18:32:58.180739       1 policy_source.go:224] refreshing policies
	I0920 18:32:58.186802       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 18:32:58.189015       1 aggregator.go:171] initial CRD sync complete...
	I0920 18:32:58.189039       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 18:32:58.189045       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 18:32:58.189052       1 cache.go:39] Caches are synced for autoregister controller
	I0920 18:32:58.254680       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 18:32:59.052554       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 18:33:00.149499       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 18:33:00.303260       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 18:33:00.323463       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 18:33:00.416666       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 18:33:00.432896       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0920 18:33:01.661960       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 18:33:01.757195       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c529d2d1a67659a8727534f484783b19b3e28f5fdea0dc634075c23cb408373a] <==
	I0920 18:28:49.085887       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:28:49.086479       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-029872-m02"
	I0920 18:28:50.449141       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-029872-m02"
	I0920 18:28:50.451071       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-029872-m03\" does not exist"
	I0920 18:28:50.460303       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-029872-m03" podCIDRs=["10.244.4.0/24"]
	I0920 18:28:50.460352       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:28:50.460408       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:28:50.484279       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:28:50.601506       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:28:50.936857       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:28:51.410479       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:29:00.737344       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:29:09.896757       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:29:09.897101       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-029872-m02"
	I0920 18:29:09.909190       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:29:11.327750       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:29:46.348016       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m02"
	I0920 18:29:46.348190       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-029872-m03"
	I0920 18:29:46.366669       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m02"
	I0920 18:29:46.404269       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.772381ms"
	I0920 18:29:46.404984       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="75.995µs"
	I0920 18:29:51.403418       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:29:51.414022       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m02"
	I0920 18:29:51.416373       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:30:01.495156       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	
	
	==> kube-controller-manager [f90f07dfbad75d09ccff8e3c009a0061d0cf92a7d37b500882f4c886b9021a7e] <==
	I0920 18:33:55.711364       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m02"
	I0920 18:33:55.726401       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m02"
	I0920 18:33:55.744816       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.623µs"
	I0920 18:33:55.762761       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.291µs"
	I0920 18:33:56.563905       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m02"
	I0920 18:33:59.834709       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.822121ms"
	I0920 18:33:59.835338       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.734µs"
	I0920 18:34:07.421698       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m02"
	I0920 18:34:13.538430       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:13.560023       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:13.776704       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-029872-m02"
	I0920 18:34:13.776810       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:14.954912       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-029872-m03\" does not exist"
	I0920 18:34:14.956004       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-029872-m02"
	I0920 18:34:14.979615       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-029872-m03" podCIDRs=["10.244.2.0/24"]
	I0920 18:34:14.979660       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:14.979686       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:15.822169       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:16.183920       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:16.631099       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:25.052746       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:34.576645       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-029872-m02"
	I0920 18:34:34.576762       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:34.586218       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:36.586914       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	
	
	==> kube-proxy [2870a2ffc4d8472bf57d2be157ab5bee83a8ad11be43f8950b4964e68b24dc82] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:26:19.405376       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:26:19.413745       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.208"]
	E0920 18:26:19.414033       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:26:19.443948       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:26:19.444043       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:26:19.444088       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:26:19.446689       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:26:19.446961       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:26:19.447166       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:26:19.448623       1 config.go:199] "Starting service config controller"
	I0920 18:26:19.448807       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:26:19.448884       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:26:19.448920       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:26:19.449440       1 config.go:328] "Starting node config controller"
	I0920 18:26:19.449998       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:26:19.549994       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:26:19.550050       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:26:19.550324       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [63d911e8c81f08e255bbb2b2f2e819d0947975821667db477cfaa24cdad47a8b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:32:58.747986       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:32:58.775688       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.208"]
	E0920 18:32:58.775885       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:32:58.834975       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:32:58.835130       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:32:58.835231       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:32:58.840713       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:32:58.841029       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:32:58.841053       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:32:58.844683       1 config.go:199] "Starting service config controller"
	I0920 18:32:58.844713       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:32:58.844734       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:32:58.844737       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:32:58.845222       1 config.go:328] "Starting node config controller"
	I0920 18:32:58.845248       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:32:58.945218       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:32:58.945293       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:32:58.945306       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [073a54c38674e8692d0308e45f3e05cc9179f3d5b85e00aa5d42bd126e0196ed] <==
	W0920 18:26:10.368192       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 18:26:10.368888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:26:10.371397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 18:26:10.371470       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:26:10.413980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 18:26:10.414368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:26:10.442142       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 18:26:10.442191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:26:10.507767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 18:26:10.507894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:26:10.572508       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 18:26:10.572587       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 18:26:10.653031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 18:26:10.653189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:26:10.680026       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 18:26:10.680620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:26:10.754269       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 18:26:10.754395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:26:10.849967       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 18:26:10.850267       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 18:26:13.551492       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:31:15.519985       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0920 18:31:15.520135       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0920 18:31:15.520451       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0920 18:31:15.520904       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [4e6ed28af14ff16bd8c9770970f355fc2cc818a8f475c5888fc4fdb56c129cb7] <==
	I0920 18:32:56.949929       1 serving.go:386] Generated self-signed cert in-memory
	W0920 18:32:58.121626       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 18:32:58.121769       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 18:32:58.121799       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:32:58.121867       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:32:58.164260       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 18:32:58.164434       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:32:58.167999       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 18:32:58.168060       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:32:58.168216       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 18:32:58.168320       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 18:32:58.269929       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:33:05 multinode-029872 kubelet[3178]: E0920 18:33:05.282504    3178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857185281353676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:33:05 multinode-029872 kubelet[3178]: E0920 18:33:05.282613    3178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857185281353676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:33:15 multinode-029872 kubelet[3178]: E0920 18:33:15.285674    3178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857195284558127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:33:15 multinode-029872 kubelet[3178]: E0920 18:33:15.286451    3178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857195284558127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:33:25 multinode-029872 kubelet[3178]: E0920 18:33:25.289357    3178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857205288728367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:33:25 multinode-029872 kubelet[3178]: E0920 18:33:25.290032    3178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857205288728367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:33:35 multinode-029872 kubelet[3178]: E0920 18:33:35.291901    3178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857215291243832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:33:35 multinode-029872 kubelet[3178]: E0920 18:33:35.291946    3178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857215291243832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:33:45 multinode-029872 kubelet[3178]: E0920 18:33:45.294219    3178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857225293804452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:33:45 multinode-029872 kubelet[3178]: E0920 18:33:45.294244    3178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857225293804452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:33:55 multinode-029872 kubelet[3178]: E0920 18:33:55.279774    3178 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 18:33:55 multinode-029872 kubelet[3178]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:33:55 multinode-029872 kubelet[3178]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:33:55 multinode-029872 kubelet[3178]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:33:55 multinode-029872 kubelet[3178]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:33:55 multinode-029872 kubelet[3178]: E0920 18:33:55.296894    3178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857235295977036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:33:55 multinode-029872 kubelet[3178]: E0920 18:33:55.296938    3178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857235295977036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:34:05 multinode-029872 kubelet[3178]: E0920 18:34:05.299907    3178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857245299052284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:34:05 multinode-029872 kubelet[3178]: E0920 18:34:05.299946    3178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857245299052284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:34:15 multinode-029872 kubelet[3178]: E0920 18:34:15.307445    3178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857255306946787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:34:15 multinode-029872 kubelet[3178]: E0920 18:34:15.308308    3178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857255306946787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:34:25 multinode-029872 kubelet[3178]: E0920 18:34:25.314572    3178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857265314113506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:34:25 multinode-029872 kubelet[3178]: E0920 18:34:25.314691    3178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857265314113506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:34:35 multinode-029872 kubelet[3178]: E0920 18:34:35.316465    3178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857275316003808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:34:35 multinode-029872 kubelet[3178]: E0920 18:34:35.316508    3178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857275316003808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:34:37.122114  275512 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19679-237658/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-029872 -n multinode-029872
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-029872 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (326.38s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (144.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 stop
E0920 18:35:32.558931  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-029872 stop: exit status 82 (2m0.486892411s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-029872-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-029872 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-029872 status: (18.68865555s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-029872 status --alsologtostderr: (3.391990767s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-029872 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-029872 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-029872 -n multinode-029872
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-029872 logs -n 25: (1.470045347s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-029872 ssh -n                                                                 | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-029872 cp multinode-029872-m02:/home/docker/cp-test.txt                       | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872:/home/docker/cp-test_multinode-029872-m02_multinode-029872.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n                                                                 | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n multinode-029872 sudo cat                                       | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | /home/docker/cp-test_multinode-029872-m02_multinode-029872.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-029872 cp multinode-029872-m02:/home/docker/cp-test.txt                       | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m03:/home/docker/cp-test_multinode-029872-m02_multinode-029872-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n                                                                 | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n multinode-029872-m03 sudo cat                                   | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | /home/docker/cp-test_multinode-029872-m02_multinode-029872-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-029872 cp testdata/cp-test.txt                                                | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n                                                                 | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-029872 cp multinode-029872-m03:/home/docker/cp-test.txt                       | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2317310210/001/cp-test_multinode-029872-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n                                                                 | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-029872 cp multinode-029872-m03:/home/docker/cp-test.txt                       | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872:/home/docker/cp-test_multinode-029872-m03_multinode-029872.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n                                                                 | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n multinode-029872 sudo cat                                       | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | /home/docker/cp-test_multinode-029872-m03_multinode-029872.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-029872 cp multinode-029872-m03:/home/docker/cp-test.txt                       | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m02:/home/docker/cp-test_multinode-029872-m03_multinode-029872-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n                                                                 | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n multinode-029872-m02 sudo cat                                   | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | /home/docker/cp-test_multinode-029872-m03_multinode-029872-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-029872 node stop m03                                                          | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	| node    | multinode-029872 node start                                                             | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:29 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-029872                                                                | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:29 UTC |                     |
	| stop    | -p multinode-029872                                                                     | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:29 UTC |                     |
	| start   | -p multinode-029872                                                                     | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:31 UTC | 20 Sep 24 18:34 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-029872                                                                | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:34 UTC |                     |
	| node    | multinode-029872 node delete                                                            | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:34 UTC | 20 Sep 24 18:34 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-029872 stop                                                                   | multinode-029872 | jenkins | v1.34.0 | 20 Sep 24 18:34 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:31:14
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:31:14.512300  274391 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:31:14.512566  274391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:31:14.512574  274391 out.go:358] Setting ErrFile to fd 2...
	I0920 18:31:14.512578  274391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:31:14.512761  274391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 18:31:14.513408  274391 out.go:352] Setting JSON to false
	I0920 18:31:14.514534  274391 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8017,"bootTime":1726849057,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:31:14.514671  274391 start.go:139] virtualization: kvm guest
	I0920 18:31:14.518049  274391 out.go:177] * [multinode-029872] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:31:14.519555  274391 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 18:31:14.519557  274391 notify.go:220] Checking for updates...
	I0920 18:31:14.522094  274391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:31:14.523377  274391 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 18:31:14.524630  274391 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:31:14.526064  274391 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:31:14.527629  274391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:31:14.529363  274391 config.go:182] Loaded profile config "multinode-029872": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:31:14.529456  274391 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:31:14.529987  274391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:31:14.530030  274391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:31:14.547591  274391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43771
	I0920 18:31:14.548116  274391 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:31:14.548758  274391 main.go:141] libmachine: Using API Version  1
	I0920 18:31:14.548783  274391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:31:14.549181  274391 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:31:14.549377  274391 main.go:141] libmachine: (multinode-029872) Calling .DriverName
	I0920 18:31:14.587200  274391 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:31:14.588718  274391 start.go:297] selected driver: kvm2
	I0920 18:31:14.588747  274391 start.go:901] validating driver "kvm2" against &{Name:multinode-029872 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-029872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:31:14.588962  274391 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:31:14.589330  274391 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:31:14.589432  274391 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:31:14.606846  274391 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:31:14.607503  274391 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:31:14.607536  274391 cni.go:84] Creating CNI manager for ""
	I0920 18:31:14.607579  274391 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0920 18:31:14.607632  274391 start.go:340] cluster config:
	{Name:multinode-029872 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-029872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:31:14.607785  274391 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:31:14.609837  274391 out.go:177] * Starting "multinode-029872" primary control-plane node in "multinode-029872" cluster
	I0920 18:31:14.611207  274391 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:31:14.611255  274391 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:31:14.611263  274391 cache.go:56] Caching tarball of preloaded images
	I0920 18:31:14.611347  274391 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:31:14.611357  274391 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:31:14.611469  274391 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/config.json ...
	I0920 18:31:14.611679  274391 start.go:360] acquireMachinesLock for multinode-029872: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:31:14.611724  274391 start.go:364] duration metric: took 26.527µs to acquireMachinesLock for "multinode-029872"
	I0920 18:31:14.611738  274391 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:31:14.611746  274391 fix.go:54] fixHost starting: 
	I0920 18:31:14.612032  274391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:31:14.612066  274391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:31:14.627430  274391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43243
	I0920 18:31:14.627962  274391 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:31:14.628497  274391 main.go:141] libmachine: Using API Version  1
	I0920 18:31:14.628523  274391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:31:14.628922  274391 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:31:14.629118  274391 main.go:141] libmachine: (multinode-029872) Calling .DriverName
	I0920 18:31:14.629307  274391 main.go:141] libmachine: (multinode-029872) Calling .GetState
	I0920 18:31:14.631088  274391 fix.go:112] recreateIfNeeded on multinode-029872: state=Running err=<nil>
	W0920 18:31:14.631106  274391 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:31:14.633278  274391 out.go:177] * Updating the running kvm2 "multinode-029872" VM ...
	I0920 18:31:14.634682  274391 machine.go:93] provisionDockerMachine start ...
	I0920 18:31:14.634709  274391 main.go:141] libmachine: (multinode-029872) Calling .DriverName
	I0920 18:31:14.634938  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHHostname
	I0920 18:31:14.637601  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:14.638113  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:31:14.638147  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:14.638271  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHPort
	I0920 18:31:14.638458  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:31:14.638626  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:31:14.638761  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHUsername
	I0920 18:31:14.638931  274391 main.go:141] libmachine: Using SSH client type: native
	I0920 18:31:14.639100  274391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0920 18:31:14.639110  274391 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:31:14.756608  274391 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-029872
	
	I0920 18:31:14.756651  274391 main.go:141] libmachine: (multinode-029872) Calling .GetMachineName
	I0920 18:31:14.757048  274391 buildroot.go:166] provisioning hostname "multinode-029872"
	I0920 18:31:14.757082  274391 main.go:141] libmachine: (multinode-029872) Calling .GetMachineName
	I0920 18:31:14.757277  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHHostname
	I0920 18:31:14.760298  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:14.760700  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:31:14.760741  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:14.760868  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHPort
	I0920 18:31:14.761040  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:31:14.761176  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:31:14.761317  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHUsername
	I0920 18:31:14.761498  274391 main.go:141] libmachine: Using SSH client type: native
	I0920 18:31:14.761700  274391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0920 18:31:14.761716  274391 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-029872 && echo "multinode-029872" | sudo tee /etc/hostname
	I0920 18:31:14.896853  274391 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-029872
	
	I0920 18:31:14.896896  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHHostname
	I0920 18:31:14.900252  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:14.900662  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:31:14.900692  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:14.900925  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHPort
	I0920 18:31:14.901143  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:31:14.901329  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:31:14.901429  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHUsername
	I0920 18:31:14.901557  274391 main.go:141] libmachine: Using SSH client type: native
	I0920 18:31:14.901725  274391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0920 18:31:14.901741  274391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-029872' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-029872/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-029872' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:31:15.007312  274391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:31:15.007351  274391 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 18:31:15.007377  274391 buildroot.go:174] setting up certificates
	I0920 18:31:15.007390  274391 provision.go:84] configureAuth start
	I0920 18:31:15.007404  274391 main.go:141] libmachine: (multinode-029872) Calling .GetMachineName
	I0920 18:31:15.007737  274391 main.go:141] libmachine: (multinode-029872) Calling .GetIP
	I0920 18:31:15.011091  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:15.011461  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:31:15.011492  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:15.011593  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHHostname
	I0920 18:31:15.014035  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:15.014499  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:31:15.014533  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:15.014681  274391 provision.go:143] copyHostCerts
	I0920 18:31:15.014724  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 18:31:15.014770  274391 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 18:31:15.014791  274391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 18:31:15.014973  274391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 18:31:15.015113  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 18:31:15.015144  274391 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 18:31:15.015152  274391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 18:31:15.015202  274391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 18:31:15.015271  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 18:31:15.015300  274391 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 18:31:15.015307  274391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 18:31:15.015352  274391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 18:31:15.015431  274391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.multinode-029872 san=[127.0.0.1 192.168.39.208 localhost minikube multinode-029872]
	I0920 18:31:15.236001  274391 provision.go:177] copyRemoteCerts
	I0920 18:31:15.236080  274391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:31:15.236106  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHHostname
	I0920 18:31:15.239425  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:15.239997  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:31:15.240036  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:15.240225  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHPort
	I0920 18:31:15.240465  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:31:15.240688  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHUsername
	I0920 18:31:15.240872  274391 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/multinode-029872/id_rsa Username:docker}
	I0920 18:31:15.319901  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 18:31:15.319990  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:31:15.345093  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 18:31:15.345168  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0920 18:31:15.368625  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 18:31:15.368694  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:31:15.393178  274391 provision.go:87] duration metric: took 385.77142ms to configureAuth
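The scp calls above place the CA, server certificate, and server key under /etc/docker on the guest; the server certificate was generated with SANs [127.0.0.1 192.168.39.208 localhost minikube multinode-029872]. A quick way to spot-check the result from inside the guest is sketched below (it assumes openssl is available in the Buildroot image, which the log does not show):

    # Verify the certs copied during configureAuth (paths taken from the log above)
    ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
    # Confirm the generated server cert carries the expected SANs
    openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName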
	I0920 18:31:15.393216  274391 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:31:15.393454  274391 config.go:182] Loaded profile config "multinode-029872": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:31:15.393565  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHHostname
	I0920 18:31:15.396696  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:15.397264  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:31:15.397293  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:31:15.397503  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHPort
	I0920 18:31:15.397726  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:31:15.397951  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:31:15.398167  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHUsername
	I0920 18:31:15.398359  274391 main.go:141] libmachine: Using SSH client type: native
	I0920 18:31:15.398541  274391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0920 18:31:15.398563  274391 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:32:46.158654  274391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:32:46.158694  274391 machine.go:96] duration metric: took 1m31.52399243s to provisionDockerMachine
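The SSH command issued at 18:31:15 above writes the container-runtime options and restarts CRI-O; collected into a stand-alone form it is equivalent to the following (content taken verbatim from the log; the environment file is presumably sourced by the crio systemd unit inside the minikube guest):

    sudo mkdir -p /etc/sysconfig
    printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio

The command was sent at 18:31:15.398 and returned at 18:32:46.158, so almost all of the 1m31.5s reported for provisionDockerMachine is this single crio restart.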
	I0920 18:32:46.158712  274391 start.go:293] postStartSetup for "multinode-029872" (driver="kvm2")
	I0920 18:32:46.158726  274391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:32:46.158751  274391 main.go:141] libmachine: (multinode-029872) Calling .DriverName
	I0920 18:32:46.159077  274391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:32:46.159109  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHHostname
	I0920 18:32:46.162214  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:46.162780  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:32:46.162811  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:46.163002  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHPort
	I0920 18:32:46.163174  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:32:46.163316  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHUsername
	I0920 18:32:46.163456  274391 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/multinode-029872/id_rsa Username:docker}
	I0920 18:32:46.246439  274391 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:32:46.250595  274391 command_runner.go:130] > NAME=Buildroot
	I0920 18:32:46.250621  274391 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0920 18:32:46.250625  274391 command_runner.go:130] > ID=buildroot
	I0920 18:32:46.250630  274391 command_runner.go:130] > VERSION_ID=2023.02.9
	I0920 18:32:46.250635  274391 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0920 18:32:46.250661  274391 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:32:46.250675  274391 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 18:32:46.250747  274391 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 18:32:46.250822  274391 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 18:32:46.250838  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /etc/ssl/certs/2448492.pem
	I0920 18:32:46.250928  274391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:32:46.260895  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:32:46.287124  274391 start.go:296] duration metric: took 128.392092ms for postStartSetup
	I0920 18:32:46.287183  274391 fix.go:56] duration metric: took 1m31.675436157s for fixHost
	I0920 18:32:46.287211  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHHostname
	I0920 18:32:46.290280  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:46.290722  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:32:46.290755  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:46.290923  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHPort
	I0920 18:32:46.291131  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:32:46.291349  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:32:46.291563  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHUsername
	I0920 18:32:46.291758  274391 main.go:141] libmachine: Using SSH client type: native
	I0920 18:32:46.291939  274391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0920 18:32:46.291949  274391 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:32:46.395028  274391 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726857166.374393175
	
	I0920 18:32:46.395056  274391 fix.go:216] guest clock: 1726857166.374393175
	I0920 18:32:46.395064  274391 fix.go:229] Guest: 2024-09-20 18:32:46.374393175 +0000 UTC Remote: 2024-09-20 18:32:46.287188601 +0000 UTC m=+91.813631050 (delta=87.204574ms)
	I0920 18:32:46.395086  274391 fix.go:200] guest clock delta is within tolerance: 87.204574ms
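fix.go compares the guest clock against the host-side timestamp it recorded for the same moment; the delta is simply the difference of the two epoch values: 1726857166.374393175 - 1726857166.287188601 = 0.087204574 s, i.e. the 87.204574ms reported above, which is inside the tolerance, so no clock resync is forced.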
	I0920 18:32:46.395090  274391 start.go:83] releasing machines lock for "multinode-029872", held for 1m31.78335832s
	I0920 18:32:46.395109  274391 main.go:141] libmachine: (multinode-029872) Calling .DriverName
	I0920 18:32:46.395446  274391 main.go:141] libmachine: (multinode-029872) Calling .GetIP
	I0920 18:32:46.398516  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:46.398958  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:32:46.398987  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:46.399183  274391 main.go:141] libmachine: (multinode-029872) Calling .DriverName
	I0920 18:32:46.399825  274391 main.go:141] libmachine: (multinode-029872) Calling .DriverName
	I0920 18:32:46.400072  274391 main.go:141] libmachine: (multinode-029872) Calling .DriverName
	I0920 18:32:46.400156  274391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:32:46.400221  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHHostname
	I0920 18:32:46.400341  274391 ssh_runner.go:195] Run: cat /version.json
	I0920 18:32:46.400369  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHHostname
	I0920 18:32:46.403255  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:46.403434  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:46.403804  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:32:46.403837  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:46.403942  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHPort
	I0920 18:32:46.404143  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:32:46.404180  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:46.404154  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:32:46.404366  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHPort
	I0920 18:32:46.404370  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHUsername
	I0920 18:32:46.404593  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:32:46.404609  274391 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/multinode-029872/id_rsa Username:docker}
	I0920 18:32:46.404781  274391 main.go:141] libmachine: (multinode-029872) Calling .GetSSHUsername
	I0920 18:32:46.404904  274391 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/multinode-029872/id_rsa Username:docker}
	I0920 18:32:46.478609  274391 command_runner.go:130] > {"iso_version": "v1.34.0-1726481713-19649", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "fcd4ba3dbb1ef408e3a4b79c864df2496ddd3848"}
	I0920 18:32:46.478766  274391 ssh_runner.go:195] Run: systemctl --version
	I0920 18:32:46.520173  274391 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0920 18:32:46.520913  274391 command_runner.go:130] > systemd 252 (252)
	I0920 18:32:46.520946  274391 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0920 18:32:46.521002  274391 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:32:46.681155  274391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 18:32:46.688485  274391 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0920 18:32:46.688921  274391 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:32:46.688995  274391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:32:46.698081  274391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 18:32:46.698109  274391 start.go:495] detecting cgroup driver to use...
	I0920 18:32:46.698170  274391 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:32:46.715642  274391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:32:46.729899  274391 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:32:46.729985  274391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:32:46.744093  274391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:32:46.757403  274391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:32:46.906815  274391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:32:47.063516  274391 docker.go:233] disabling docker service ...
	I0920 18:32:47.063603  274391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:32:47.083329  274391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:32:47.096999  274391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:32:47.253856  274391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:32:47.414344  274391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
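The systemctl calls from 18:32:46.729 through 18:32:47.414 shut down cri-dockerd and Docker so that CRI-O is the only active runtime. Collapsed into a stand-alone sketch (commands taken from the log; the multi-unit stop invocations are just a compaction of the individual stops shown above):

    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service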
	I0920 18:32:47.427978  274391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:32:47.447426  274391 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0920 18:32:47.447473  274391 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:32:47.447519  274391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:32:47.457457  274391 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:32:47.457526  274391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:32:47.467676  274391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:32:47.477545  274391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:32:47.487270  274391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:32:47.497375  274391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:32:47.507026  274391 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:32:47.517243  274391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:32:47.527032  274391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:32:47.535601  274391 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0920 18:32:47.535919  274391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:32:47.545102  274391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:32:47.680908  274391 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:32:52.425651  274391 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.744685776s)
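The tee/sed sequence starting at 18:32:47.427 rewrites the crictl and CRI-O configuration in place before the restart above. Assembled from those commands, the end state is roughly as follows (a sketch only: the TOML section headers follow CRI-O's documented layout and do not appear in the log, and untouched defaults are omitted):

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf, fields touched by the sed edits above
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]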
	I0920 18:32:52.425687  274391 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:32:52.425746  274391 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:32:52.430418  274391 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0920 18:32:52.430444  274391 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0920 18:32:52.430451  274391 command_runner.go:130] > Device: 0,22	Inode: 1324        Links: 1
	I0920 18:32:52.430458  274391 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0920 18:32:52.430463  274391 command_runner.go:130] > Access: 2024-09-20 18:32:52.288300714 +0000
	I0920 18:32:52.430469  274391 command_runner.go:130] > Modify: 2024-09-20 18:32:52.288300714 +0000
	I0920 18:32:52.430476  274391 command_runner.go:130] > Change: 2024-09-20 18:32:52.288300714 +0000
	I0920 18:32:52.430482  274391 command_runner.go:130] >  Birth: -
	I0920 18:32:52.430504  274391 start.go:563] Will wait 60s for crictl version
	I0920 18:32:52.430557  274391 ssh_runner.go:195] Run: which crictl
	I0920 18:32:52.434188  274391 command_runner.go:130] > /usr/bin/crictl
	I0920 18:32:52.434250  274391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:32:52.473166  274391 command_runner.go:130] > Version:  0.1.0
	I0920 18:32:52.473197  274391 command_runner.go:130] > RuntimeName:  cri-o
	I0920 18:32:52.473204  274391 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0920 18:32:52.473212  274391 command_runner.go:130] > RuntimeApiVersion:  v1
	I0920 18:32:52.473289  274391 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:32:52.473362  274391 ssh_runner.go:195] Run: crio --version
	I0920 18:32:52.498836  274391 command_runner.go:130] > crio version 1.29.1
	I0920 18:32:52.498867  274391 command_runner.go:130] > Version:        1.29.1
	I0920 18:32:52.498873  274391 command_runner.go:130] > GitCommit:      unknown
	I0920 18:32:52.498879  274391 command_runner.go:130] > GitCommitDate:  unknown
	I0920 18:32:52.498885  274391 command_runner.go:130] > GitTreeState:   clean
	I0920 18:32:52.498892  274391 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0920 18:32:52.498897  274391 command_runner.go:130] > GoVersion:      go1.21.6
	I0920 18:32:52.498900  274391 command_runner.go:130] > Compiler:       gc
	I0920 18:32:52.498905  274391 command_runner.go:130] > Platform:       linux/amd64
	I0920 18:32:52.498916  274391 command_runner.go:130] > Linkmode:       dynamic
	I0920 18:32:52.498921  274391 command_runner.go:130] > BuildTags:      
	I0920 18:32:52.498925  274391 command_runner.go:130] >   containers_image_ostree_stub
	I0920 18:32:52.498936  274391 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0920 18:32:52.498943  274391 command_runner.go:130] >   btrfs_noversion
	I0920 18:32:52.498950  274391 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0920 18:32:52.498957  274391 command_runner.go:130] >   libdm_no_deferred_remove
	I0920 18:32:52.498965  274391 command_runner.go:130] >   seccomp
	I0920 18:32:52.498973  274391 command_runner.go:130] > LDFlags:          unknown
	I0920 18:32:52.498982  274391 command_runner.go:130] > SeccompEnabled:   true
	I0920 18:32:52.498989  274391 command_runner.go:130] > AppArmorEnabled:  false
	I0920 18:32:52.500295  274391 ssh_runner.go:195] Run: crio --version
	I0920 18:32:52.528483  274391 command_runner.go:130] > crio version 1.29.1
	I0920 18:32:52.528517  274391 command_runner.go:130] > Version:        1.29.1
	I0920 18:32:52.528525  274391 command_runner.go:130] > GitCommit:      unknown
	I0920 18:32:52.528532  274391 command_runner.go:130] > GitCommitDate:  unknown
	I0920 18:32:52.528537  274391 command_runner.go:130] > GitTreeState:   clean
	I0920 18:32:52.528546  274391 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0920 18:32:52.528552  274391 command_runner.go:130] > GoVersion:      go1.21.6
	I0920 18:32:52.528556  274391 command_runner.go:130] > Compiler:       gc
	I0920 18:32:52.528561  274391 command_runner.go:130] > Platform:       linux/amd64
	I0920 18:32:52.528570  274391 command_runner.go:130] > Linkmode:       dynamic
	I0920 18:32:52.528577  274391 command_runner.go:130] > BuildTags:      
	I0920 18:32:52.528583  274391 command_runner.go:130] >   containers_image_ostree_stub
	I0920 18:32:52.528589  274391 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0920 18:32:52.528595  274391 command_runner.go:130] >   btrfs_noversion
	I0920 18:32:52.528601  274391 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0920 18:32:52.528609  274391 command_runner.go:130] >   libdm_no_deferred_remove
	I0920 18:32:52.528615  274391 command_runner.go:130] >   seccomp
	I0920 18:32:52.528626  274391 command_runner.go:130] > LDFlags:          unknown
	I0920 18:32:52.528632  274391 command_runner.go:130] > SeccompEnabled:   true
	I0920 18:32:52.528637  274391 command_runner.go:130] > AppArmorEnabled:  false
	I0920 18:32:52.530748  274391 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:32:52.532212  274391 main.go:141] libmachine: (multinode-029872) Calling .GetIP
	I0920 18:32:52.535002  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:52.535487  274391 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:32:52.535520  274391 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:32:52.535764  274391 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:32:52.539998  274391 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0920 18:32:52.540223  274391 kubeadm.go:883] updating cluster {Name:multinode-029872 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-029872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:32:52.540380  274391 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:32:52.540428  274391 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:32:52.582828  274391 command_runner.go:130] > {
	I0920 18:32:52.582862  274391 command_runner.go:130] >   "images": [
	I0920 18:32:52.582869  274391 command_runner.go:130] >     {
	I0920 18:32:52.582881  274391 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0920 18:32:52.582888  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.582897  274391 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0920 18:32:52.582902  274391 command_runner.go:130] >       ],
	I0920 18:32:52.582909  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.582921  274391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0920 18:32:52.582955  274391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0920 18:32:52.582963  274391 command_runner.go:130] >       ],
	I0920 18:32:52.582968  274391 command_runner.go:130] >       "size": "87190579",
	I0920 18:32:52.582972  274391 command_runner.go:130] >       "uid": null,
	I0920 18:32:52.582976  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.582984  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.582992  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.582995  274391 command_runner.go:130] >     },
	I0920 18:32:52.582999  274391 command_runner.go:130] >     {
	I0920 18:32:52.583004  274391 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0920 18:32:52.583008  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.583013  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0920 18:32:52.583017  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583021  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.583028  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0920 18:32:52.583037  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0920 18:32:52.583041  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583046  274391 command_runner.go:130] >       "size": "1363676",
	I0920 18:32:52.583050  274391 command_runner.go:130] >       "uid": null,
	I0920 18:32:52.583058  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.583067  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.583073  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.583083  274391 command_runner.go:130] >     },
	I0920 18:32:52.583089  274391 command_runner.go:130] >     {
	I0920 18:32:52.583101  274391 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0920 18:32:52.583110  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.583119  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0920 18:32:52.583127  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583133  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.583145  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0920 18:32:52.583160  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0920 18:32:52.583167  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583175  274391 command_runner.go:130] >       "size": "31470524",
	I0920 18:32:52.583182  274391 command_runner.go:130] >       "uid": null,
	I0920 18:32:52.583188  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.583194  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.583202  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.583205  274391 command_runner.go:130] >     },
	I0920 18:32:52.583209  274391 command_runner.go:130] >     {
	I0920 18:32:52.583216  274391 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0920 18:32:52.583220  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.583225  274391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0920 18:32:52.583229  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583235  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.583245  274391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0920 18:32:52.583262  274391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0920 18:32:52.583267  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583272  274391 command_runner.go:130] >       "size": "63273227",
	I0920 18:32:52.583277  274391 command_runner.go:130] >       "uid": null,
	I0920 18:32:52.583281  274391 command_runner.go:130] >       "username": "nonroot",
	I0920 18:32:52.583284  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.583288  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.583292  274391 command_runner.go:130] >     },
	I0920 18:32:52.583295  274391 command_runner.go:130] >     {
	I0920 18:32:52.583301  274391 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0920 18:32:52.583307  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.583312  274391 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0920 18:32:52.583317  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583321  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.583330  274391 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0920 18:32:52.583339  274391 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0920 18:32:52.583343  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583347  274391 command_runner.go:130] >       "size": "149009664",
	I0920 18:32:52.583350  274391 command_runner.go:130] >       "uid": {
	I0920 18:32:52.583357  274391 command_runner.go:130] >         "value": "0"
	I0920 18:32:52.583360  274391 command_runner.go:130] >       },
	I0920 18:32:52.583364  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.583368  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.583372  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.583376  274391 command_runner.go:130] >     },
	I0920 18:32:52.583379  274391 command_runner.go:130] >     {
	I0920 18:32:52.583385  274391 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0920 18:32:52.583391  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.583396  274391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0920 18:32:52.583401  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583405  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.583412  274391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0920 18:32:52.583418  274391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0920 18:32:52.583422  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583425  274391 command_runner.go:130] >       "size": "95237600",
	I0920 18:32:52.583429  274391 command_runner.go:130] >       "uid": {
	I0920 18:32:52.583434  274391 command_runner.go:130] >         "value": "0"
	I0920 18:32:52.583436  274391 command_runner.go:130] >       },
	I0920 18:32:52.583440  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.583443  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.583447  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.583450  274391 command_runner.go:130] >     },
	I0920 18:32:52.583453  274391 command_runner.go:130] >     {
	I0920 18:32:52.583459  274391 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0920 18:32:52.583462  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.583467  274391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0920 18:32:52.583471  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583474  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.583481  274391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0920 18:32:52.583490  274391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0920 18:32:52.583494  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583498  274391 command_runner.go:130] >       "size": "89437508",
	I0920 18:32:52.583501  274391 command_runner.go:130] >       "uid": {
	I0920 18:32:52.583507  274391 command_runner.go:130] >         "value": "0"
	I0920 18:32:52.583513  274391 command_runner.go:130] >       },
	I0920 18:32:52.583517  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.583521  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.583525  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.583528  274391 command_runner.go:130] >     },
	I0920 18:32:52.583531  274391 command_runner.go:130] >     {
	I0920 18:32:52.583537  274391 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0920 18:32:52.583543  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.583547  274391 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0920 18:32:52.583550  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583555  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.583572  274391 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0920 18:32:52.583581  274391 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0920 18:32:52.583585  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583589  274391 command_runner.go:130] >       "size": "92733849",
	I0920 18:32:52.583594  274391 command_runner.go:130] >       "uid": null,
	I0920 18:32:52.583598  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.583601  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.583605  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.583608  274391 command_runner.go:130] >     },
	I0920 18:32:52.583611  274391 command_runner.go:130] >     {
	I0920 18:32:52.583621  274391 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0920 18:32:52.583625  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.583629  274391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0920 18:32:52.583632  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583636  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.583643  274391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0920 18:32:52.583649  274391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0920 18:32:52.583653  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583660  274391 command_runner.go:130] >       "size": "68420934",
	I0920 18:32:52.583664  274391 command_runner.go:130] >       "uid": {
	I0920 18:32:52.583668  274391 command_runner.go:130] >         "value": "0"
	I0920 18:32:52.583672  274391 command_runner.go:130] >       },
	I0920 18:32:52.583676  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.583679  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.583683  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.583686  274391 command_runner.go:130] >     },
	I0920 18:32:52.583689  274391 command_runner.go:130] >     {
	I0920 18:32:52.583695  274391 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0920 18:32:52.583701  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.583705  274391 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0920 18:32:52.583709  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583713  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.583719  274391 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0920 18:32:52.583728  274391 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0920 18:32:52.583732  274391 command_runner.go:130] >       ],
	I0920 18:32:52.583736  274391 command_runner.go:130] >       "size": "742080",
	I0920 18:32:52.583740  274391 command_runner.go:130] >       "uid": {
	I0920 18:32:52.583746  274391 command_runner.go:130] >         "value": "65535"
	I0920 18:32:52.583751  274391 command_runner.go:130] >       },
	I0920 18:32:52.583754  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.583758  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.583762  274391 command_runner.go:130] >       "pinned": true
	I0920 18:32:52.583765  274391 command_runner.go:130] >     }
	I0920 18:32:52.583769  274391 command_runner.go:130] >   ]
	I0920 18:32:52.583772  274391 command_runner.go:130] > }
	I0920 18:32:52.583997  274391 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:32:52.584013  274391 crio.go:433] Images already preloaded, skipping extraction
	I0920 18:32:52.584075  274391 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:32:52.615543  274391 command_runner.go:130] > {
	I0920 18:32:52.615569  274391 command_runner.go:130] >   "images": [
	I0920 18:32:52.615575  274391 command_runner.go:130] >     {
	I0920 18:32:52.615586  274391 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0920 18:32:52.615600  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.615608  274391 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0920 18:32:52.615613  274391 command_runner.go:130] >       ],
	I0920 18:32:52.615619  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.615631  274391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0920 18:32:52.615641  274391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0920 18:32:52.615646  274391 command_runner.go:130] >       ],
	I0920 18:32:52.615653  274391 command_runner.go:130] >       "size": "87190579",
	I0920 18:32:52.615664  274391 command_runner.go:130] >       "uid": null,
	I0920 18:32:52.615673  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.615686  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.615694  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.615700  274391 command_runner.go:130] >     },
	I0920 18:32:52.615706  274391 command_runner.go:130] >     {
	I0920 18:32:52.615716  274391 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0920 18:32:52.615725  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.615733  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0920 18:32:52.615739  274391 command_runner.go:130] >       ],
	I0920 18:32:52.615749  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.615764  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0920 18:32:52.615780  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0920 18:32:52.615789  274391 command_runner.go:130] >       ],
	I0920 18:32:52.615796  274391 command_runner.go:130] >       "size": "1363676",
	I0920 18:32:52.615805  274391 command_runner.go:130] >       "uid": null,
	I0920 18:32:52.615816  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.615825  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.615832  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.615840  274391 command_runner.go:130] >     },
	I0920 18:32:52.615845  274391 command_runner.go:130] >     {
	I0920 18:32:52.615854  274391 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0920 18:32:52.615863  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.615876  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0920 18:32:52.615885  274391 command_runner.go:130] >       ],
	I0920 18:32:52.615892  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.615908  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0920 18:32:52.615924  274391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0920 18:32:52.615932  274391 command_runner.go:130] >       ],
	I0920 18:32:52.615946  274391 command_runner.go:130] >       "size": "31470524",
	I0920 18:32:52.615955  274391 command_runner.go:130] >       "uid": null,
	I0920 18:32:52.615962  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.615972  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.615981  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.615989  274391 command_runner.go:130] >     },
	I0920 18:32:52.615996  274391 command_runner.go:130] >     {
	I0920 18:32:52.616009  274391 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0920 18:32:52.616018  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.616028  274391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0920 18:32:52.616036  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616044  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.616060  274391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0920 18:32:52.616080  274391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0920 18:32:52.616092  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616103  274391 command_runner.go:130] >       "size": "63273227",
	I0920 18:32:52.616113  274391 command_runner.go:130] >       "uid": null,
	I0920 18:32:52.616123  274391 command_runner.go:130] >       "username": "nonroot",
	I0920 18:32:52.616131  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.616140  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.616146  274391 command_runner.go:130] >     },
	I0920 18:32:52.616154  274391 command_runner.go:130] >     {
	I0920 18:32:52.616166  274391 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0920 18:32:52.616176  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.616186  274391 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0920 18:32:52.616194  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616202  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.616217  274391 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0920 18:32:52.616231  274391 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0920 18:32:52.616239  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616247  274391 command_runner.go:130] >       "size": "149009664",
	I0920 18:32:52.616256  274391 command_runner.go:130] >       "uid": {
	I0920 18:32:52.616264  274391 command_runner.go:130] >         "value": "0"
	I0920 18:32:52.616273  274391 command_runner.go:130] >       },
	I0920 18:32:52.616282  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.616292  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.616302  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.616308  274391 command_runner.go:130] >     },
	I0920 18:32:52.616314  274391 command_runner.go:130] >     {
	I0920 18:32:52.616326  274391 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0920 18:32:52.616336  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.616347  274391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0920 18:32:52.616355  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616362  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.616377  274391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0920 18:32:52.616393  274391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0920 18:32:52.616402  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616411  274391 command_runner.go:130] >       "size": "95237600",
	I0920 18:32:52.616420  274391 command_runner.go:130] >       "uid": {
	I0920 18:32:52.616428  274391 command_runner.go:130] >         "value": "0"
	I0920 18:32:52.616436  274391 command_runner.go:130] >       },
	I0920 18:32:52.616443  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.616452  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.616462  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.616467  274391 command_runner.go:130] >     },
	I0920 18:32:52.616473  274391 command_runner.go:130] >     {
	I0920 18:32:52.616485  274391 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0920 18:32:52.616501  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.616513  274391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0920 18:32:52.616521  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616529  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.616544  274391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0920 18:32:52.616560  274391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0920 18:32:52.616570  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616579  274391 command_runner.go:130] >       "size": "89437508",
	I0920 18:32:52.616588  274391 command_runner.go:130] >       "uid": {
	I0920 18:32:52.616622  274391 command_runner.go:130] >         "value": "0"
	I0920 18:32:52.616628  274391 command_runner.go:130] >       },
	I0920 18:32:52.616634  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.616640  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.616650  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.616656  274391 command_runner.go:130] >     },
	I0920 18:32:52.616662  274391 command_runner.go:130] >     {
	I0920 18:32:52.616673  274391 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0920 18:32:52.616683  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.616693  274391 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0920 18:32:52.616701  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616709  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.616737  274391 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0920 18:32:52.616753  274391 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0920 18:32:52.616763  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616777  274391 command_runner.go:130] >       "size": "92733849",
	I0920 18:32:52.616787  274391 command_runner.go:130] >       "uid": null,
	I0920 18:32:52.616796  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.616804  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.616813  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.616822  274391 command_runner.go:130] >     },
	I0920 18:32:52.616831  274391 command_runner.go:130] >     {
	I0920 18:32:52.616842  274391 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0920 18:32:52.616851  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.616860  274391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0920 18:32:52.616868  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616875  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.616891  274391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0920 18:32:52.616907  274391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0920 18:32:52.616915  274391 command_runner.go:130] >       ],
	I0920 18:32:52.616924  274391 command_runner.go:130] >       "size": "68420934",
	I0920 18:32:52.616933  274391 command_runner.go:130] >       "uid": {
	I0920 18:32:52.616940  274391 command_runner.go:130] >         "value": "0"
	I0920 18:32:52.616948  274391 command_runner.go:130] >       },
	I0920 18:32:52.616956  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.616965  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.616972  274391 command_runner.go:130] >       "pinned": false
	I0920 18:32:52.616980  274391 command_runner.go:130] >     },
	I0920 18:32:52.616987  274391 command_runner.go:130] >     {
	I0920 18:32:52.616998  274391 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0920 18:32:52.617007  274391 command_runner.go:130] >       "repoTags": [
	I0920 18:32:52.617016  274391 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0920 18:32:52.617023  274391 command_runner.go:130] >       ],
	I0920 18:32:52.617031  274391 command_runner.go:130] >       "repoDigests": [
	I0920 18:32:52.617045  274391 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0920 18:32:52.617060  274391 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0920 18:32:52.617069  274391 command_runner.go:130] >       ],
	I0920 18:32:52.617078  274391 command_runner.go:130] >       "size": "742080",
	I0920 18:32:52.617088  274391 command_runner.go:130] >       "uid": {
	I0920 18:32:52.617098  274391 command_runner.go:130] >         "value": "65535"
	I0920 18:32:52.617104  274391 command_runner.go:130] >       },
	I0920 18:32:52.617112  274391 command_runner.go:130] >       "username": "",
	I0920 18:32:52.617122  274391 command_runner.go:130] >       "spec": null,
	I0920 18:32:52.617130  274391 command_runner.go:130] >       "pinned": true
	I0920 18:32:52.617137  274391 command_runner.go:130] >     }
	I0920 18:32:52.617144  274391 command_runner.go:130] >   ]
	I0920 18:32:52.617150  274391 command_runner.go:130] > }
	I0920 18:32:52.617272  274391 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:32:52.617285  274391 cache_images.go:84] Images are preloaded, skipping loading
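(Editor's note: the JSON inventory above is what the preload check parses. For anyone reproducing this by hand, a minimal sketch, assuming crictl is present on the node and jq on the workstation; the profile name is taken from the cluster name shown a few lines below.)

    # Dump the image inventory known to CRI-O on the node, in the same JSON shape as above.
    minikube ssh -p multinode-029872 -- sudo crictl images --output json
    # Or just the repo tags, piped through jq on the workstation.
    minikube ssh -p multinode-029872 -- sudo crictl images --output json | jq -r '.images[].repoTags[]?'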
	I0920 18:32:52.617296  274391 kubeadm.go:934] updating node { 192.168.39.208 8443 v1.31.1 crio true true} ...
	I0920 18:32:52.617409  274391 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-029872 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-029872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
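(Editor's note: the ExecStart override above carries the node-specific kubelet flags such as --hostname-override and --node-ip. A non-authoritative way to confirm what the node is actually running; no file paths are assumed beyond what systemd itself reports.)

    # Print the kubelet unit together with whatever drop-ins are in effect on the node.
    minikube ssh -p multinode-029872 -- sudo systemctl cat kubelet
    # After changing a drop-in, reload unit files and restart the kubelet.
    minikube ssh -p multinode-029872 -- "sudo systemctl daemon-reload && sudo systemctl restart kubelet"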
	I0920 18:32:52.617493  274391 ssh_runner.go:195] Run: crio config
	I0920 18:32:52.656454  274391 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0920 18:32:52.656485  274391 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0920 18:32:52.656495  274391 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0920 18:32:52.656499  274391 command_runner.go:130] > #
	I0920 18:32:52.656509  274391 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0920 18:32:52.656516  274391 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0920 18:32:52.656526  274391 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0920 18:32:52.656535  274391 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0920 18:32:52.656540  274391 command_runner.go:130] > # reload'.
	I0920 18:32:52.656551  274391 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0920 18:32:52.656562  274391 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0920 18:32:52.656575  274391 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0920 18:32:52.656585  274391 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0920 18:32:52.656593  274391 command_runner.go:130] > [crio]
	I0920 18:32:52.656606  274391 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0920 18:32:52.656615  274391 command_runner.go:130] > # containers images, in this directory.
	I0920 18:32:52.656626  274391 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0920 18:32:52.656641  274391 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0920 18:32:52.656724  274391 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0920 18:32:52.656752  274391 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I0920 18:32:52.656763  274391 command_runner.go:130] > # imagestore = ""
	I0920 18:32:52.656777  274391 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0920 18:32:52.656787  274391 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0920 18:32:52.656949  274391 command_runner.go:130] > storage_driver = "overlay"
	I0920 18:32:52.656964  274391 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0920 18:32:52.656972  274391 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0920 18:32:52.656980  274391 command_runner.go:130] > storage_option = [
	I0920 18:32:52.657110  274391 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0920 18:32:52.657137  274391 command_runner.go:130] > ]
	I0920 18:32:52.657153  274391 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0920 18:32:52.657167  274391 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0920 18:32:52.657356  274391 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0920 18:32:52.657377  274391 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0920 18:32:52.657387  274391 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0920 18:32:52.657395  274391 command_runner.go:130] > # always happen on a node reboot
	I0920 18:32:52.657600  274391 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0920 18:32:52.657616  274391 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0920 18:32:52.657622  274391 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0920 18:32:52.657627  274391 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0920 18:32:52.657747  274391 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0920 18:32:52.657762  274391 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0920 18:32:52.657772  274391 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0920 18:32:52.658236  274391 command_runner.go:130] > # internal_wipe = true
	I0920 18:32:52.658248  274391 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0920 18:32:52.658254  274391 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0920 18:32:52.658628  274391 command_runner.go:130] > # internal_repair = false
	I0920 18:32:52.658643  274391 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0920 18:32:52.658653  274391 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0920 18:32:52.658659  274391 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0920 18:32:52.658935  274391 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0920 18:32:52.658948  274391 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0920 18:32:52.658955  274391 command_runner.go:130] > [crio.api]
	I0920 18:32:52.658960  274391 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0920 18:32:52.659181  274391 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0920 18:32:52.659191  274391 command_runner.go:130] > # IP address on which the stream server will listen.
	I0920 18:32:52.659332  274391 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0920 18:32:52.659347  274391 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0920 18:32:52.659355  274391 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0920 18:32:52.659543  274391 command_runner.go:130] > # stream_port = "0"
	I0920 18:32:52.659561  274391 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0920 18:32:52.659773  274391 command_runner.go:130] > # stream_enable_tls = false
	I0920 18:32:52.659788  274391 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0920 18:32:52.660122  274391 command_runner.go:130] > # stream_idle_timeout = ""
	I0920 18:32:52.660134  274391 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0920 18:32:52.660140  274391 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0920 18:32:52.660143  274391 command_runner.go:130] > # minutes.
	I0920 18:32:52.660276  274391 command_runner.go:130] > # stream_tls_cert = ""
	I0920 18:32:52.660296  274391 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0920 18:32:52.660307  274391 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0920 18:32:52.660527  274391 command_runner.go:130] > # stream_tls_key = ""
	I0920 18:32:52.660550  274391 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0920 18:32:52.660561  274391 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0920 18:32:52.660580  274391 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0920 18:32:52.660657  274391 command_runner.go:130] > # stream_tls_ca = ""
	I0920 18:32:52.660684  274391 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0920 18:32:52.660798  274391 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0920 18:32:52.660814  274391 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0920 18:32:52.660980  274391 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0920 18:32:52.661001  274391 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0920 18:32:52.661011  274391 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0920 18:32:52.661018  274391 command_runner.go:130] > [crio.runtime]
	I0920 18:32:52.661028  274391 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0920 18:32:52.661041  274391 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0920 18:32:52.661050  274391 command_runner.go:130] > # "nofile=1024:2048"
	I0920 18:32:52.661060  274391 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0920 18:32:52.661080  274391 command_runner.go:130] > # default_ulimits = [
	I0920 18:32:52.661214  274391 command_runner.go:130] > # ]
	I0920 18:32:52.661234  274391 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0920 18:32:52.661444  274391 command_runner.go:130] > # no_pivot = false
	I0920 18:32:52.661461  274391 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0920 18:32:52.661472  274391 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0920 18:32:52.661693  274391 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0920 18:32:52.661706  274391 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0920 18:32:52.661711  274391 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0920 18:32:52.661717  274391 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0920 18:32:52.661812  274391 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0920 18:32:52.661820  274391 command_runner.go:130] > # Cgroup setting for conmon
	I0920 18:32:52.661827  274391 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0920 18:32:52.662052  274391 command_runner.go:130] > conmon_cgroup = "pod"
	I0920 18:32:52.662066  274391 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0920 18:32:52.662071  274391 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0920 18:32:52.662078  274391 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0920 18:32:52.662082  274391 command_runner.go:130] > conmon_env = [
	I0920 18:32:52.662177  274391 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0920 18:32:52.662186  274391 command_runner.go:130] > ]
	I0920 18:32:52.662195  274391 command_runner.go:130] > # Additional environment variables to set for all the
	I0920 18:32:52.662203  274391 command_runner.go:130] > # containers. These are overridden if set in the
	I0920 18:32:52.662213  274391 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0920 18:32:52.662293  274391 command_runner.go:130] > # default_env = [
	I0920 18:32:52.662409  274391 command_runner.go:130] > # ]
	I0920 18:32:52.662418  274391 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0920 18:32:52.662425  274391 command_runner.go:130] > # This option is deprecated, and will be interpreted based on whether SELinux is enabled on the host in the future.
	I0920 18:32:52.662639  274391 command_runner.go:130] > # selinux = false
	I0920 18:32:52.662657  274391 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0920 18:32:52.662668  274391 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0920 18:32:52.662677  274391 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0920 18:32:52.662842  274391 command_runner.go:130] > # seccomp_profile = ""
	I0920 18:32:52.662858  274391 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0920 18:32:52.662873  274391 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0920 18:32:52.662886  274391 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0920 18:32:52.662896  274391 command_runner.go:130] > # which might increase security.
	I0920 18:32:52.662907  274391 command_runner.go:130] > # This option is currently deprecated,
	I0920 18:32:52.662916  274391 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0920 18:32:52.663027  274391 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0920 18:32:52.663043  274391 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0920 18:32:52.663054  274391 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0920 18:32:52.663064  274391 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0920 18:32:52.663078  274391 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0920 18:32:52.663088  274391 command_runner.go:130] > # This option supports live configuration reload.
	I0920 18:32:52.663259  274391 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0920 18:32:52.663272  274391 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0920 18:32:52.663281  274391 command_runner.go:130] > # the cgroup blockio controller.
	I0920 18:32:52.663406  274391 command_runner.go:130] > # blockio_config_file = ""
	I0920 18:32:52.663419  274391 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0920 18:32:52.663424  274391 command_runner.go:130] > # blockio parameters.
	I0920 18:32:52.663645  274391 command_runner.go:130] > # blockio_reload = false
	I0920 18:32:52.663664  274391 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0920 18:32:52.663670  274391 command_runner.go:130] > # irqbalance daemon.
	I0920 18:32:52.663966  274391 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0920 18:32:52.663977  274391 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0920 18:32:52.663984  274391 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0920 18:32:52.663995  274391 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0920 18:32:52.664005  274391 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0920 18:32:52.664018  274391 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0920 18:32:52.664026  274391 command_runner.go:130] > # This option supports live configuration reload.
	I0920 18:32:52.664037  274391 command_runner.go:130] > # rdt_config_file = ""
	I0920 18:32:52.664048  274391 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0920 18:32:52.664053  274391 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0920 18:32:52.664074  274391 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0920 18:32:52.664089  274391 command_runner.go:130] > # separate_pull_cgroup = ""
	I0920 18:32:52.664101  274391 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0920 18:32:52.664113  274391 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0920 18:32:52.664123  274391 command_runner.go:130] > # will be added.
	I0920 18:32:52.664128  274391 command_runner.go:130] > # default_capabilities = [
	I0920 18:32:52.664134  274391 command_runner.go:130] > # 	"CHOWN",
	I0920 18:32:52.664149  274391 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0920 18:32:52.664155  274391 command_runner.go:130] > # 	"FSETID",
	I0920 18:32:52.664161  274391 command_runner.go:130] > # 	"FOWNER",
	I0920 18:32:52.664167  274391 command_runner.go:130] > # 	"SETGID",
	I0920 18:32:52.664174  274391 command_runner.go:130] > # 	"SETUID",
	I0920 18:32:52.664180  274391 command_runner.go:130] > # 	"SETPCAP",
	I0920 18:32:52.664191  274391 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0920 18:32:52.664197  274391 command_runner.go:130] > # 	"KILL",
	I0920 18:32:52.664206  274391 command_runner.go:130] > # ]
	I0920 18:32:52.664218  274391 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0920 18:32:52.664233  274391 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0920 18:32:52.664245  274391 command_runner.go:130] > # add_inheritable_capabilities = false
	I0920 18:32:52.664255  274391 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0920 18:32:52.664268  274391 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0920 18:32:52.664276  274391 command_runner.go:130] > default_sysctls = [
	I0920 18:32:52.664285  274391 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0920 18:32:52.664292  274391 command_runner.go:130] > ]
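(Editor's note: the default_sysctls entry above sets net.ipv4.ip_unprivileged_port_start=0, which lets container processes bind ports below 1024 without extra capabilities. A minimal spot-check from inside a throwaway pod; the pod name and busybox image are illustrative assumptions.)

    # Read the sysctl from inside a pod; expect "0" when the default above is applied.
    kubectl run sysctl-check --rm -it --restart=Never --image=busybox -- \
      cat /proc/sys/net/ipv4/ip_unprivileged_port_start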
	I0920 18:32:52.664300  274391 command_runner.go:130] > # List of devices on the host that a
	I0920 18:32:52.664313  274391 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0920 18:32:52.664322  274391 command_runner.go:130] > # allowed_devices = [
	I0920 18:32:52.664329  274391 command_runner.go:130] > # 	"/dev/fuse",
	I0920 18:32:52.664335  274391 command_runner.go:130] > # ]
	I0920 18:32:52.664343  274391 command_runner.go:130] > # List of additional devices, specified as
	I0920 18:32:52.664356  274391 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0920 18:32:52.664367  274391 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0920 18:32:52.664381  274391 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0920 18:32:52.664387  274391 command_runner.go:130] > # additional_devices = [
	I0920 18:32:52.664396  274391 command_runner.go:130] > # ]
	I0920 18:32:52.664405  274391 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0920 18:32:52.664414  274391 command_runner.go:130] > # cdi_spec_dirs = [
	I0920 18:32:52.664420  274391 command_runner.go:130] > # 	"/etc/cdi",
	I0920 18:32:52.664431  274391 command_runner.go:130] > # 	"/var/run/cdi",
	I0920 18:32:52.664438  274391 command_runner.go:130] > # ]
	I0920 18:32:52.664451  274391 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0920 18:32:52.664463  274391 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0920 18:32:52.664474  274391 command_runner.go:130] > # Defaults to false.
	I0920 18:32:52.664482  274391 command_runner.go:130] > # device_ownership_from_security_context = false
	I0920 18:32:52.664495  274391 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0920 18:32:52.664507  274391 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0920 18:32:52.664516  274391 command_runner.go:130] > # hooks_dir = [
	I0920 18:32:52.664524  274391 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0920 18:32:52.664533  274391 command_runner.go:130] > # ]
	I0920 18:32:52.664542  274391 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0920 18:32:52.664558  274391 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0920 18:32:52.664570  274391 command_runner.go:130] > # its default mounts from the following two files:
	I0920 18:32:52.664576  274391 command_runner.go:130] > #
	I0920 18:32:52.664585  274391 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0920 18:32:52.664598  274391 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0920 18:32:52.664616  274391 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0920 18:32:52.664624  274391 command_runner.go:130] > #
	I0920 18:32:52.664634  274391 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0920 18:32:52.664647  274391 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0920 18:32:52.664659  274391 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0920 18:32:52.664671  274391 command_runner.go:130] > #      only add mounts it finds in this file.
	I0920 18:32:52.664676  274391 command_runner.go:130] > #
	I0920 18:32:52.664686  274391 command_runner.go:130] > # default_mounts_file = ""
	I0920 18:32:52.664694  274391 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0920 18:32:52.664708  274391 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0920 18:32:52.664718  274391 command_runner.go:130] > pids_limit = 1024
	I0920 18:32:52.664728  274391 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0920 18:32:52.664740  274391 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0920 18:32:52.664753  274391 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0920 18:32:52.664766  274391 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0920 18:32:52.664775  274391 command_runner.go:130] > # log_size_max = -1
	I0920 18:32:52.664787  274391 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0920 18:32:52.664798  274391 command_runner.go:130] > # log_to_journald = false
	I0920 18:32:52.664809  274391 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0920 18:32:52.664821  274391 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0920 18:32:52.664832  274391 command_runner.go:130] > # Path to directory for container attach sockets.
	I0920 18:32:52.664841  274391 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0920 18:32:52.664852  274391 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0920 18:32:52.664861  274391 command_runner.go:130] > # bind_mount_prefix = ""
	I0920 18:32:52.664870  274391 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0920 18:32:52.664879  274391 command_runner.go:130] > # read_only = false
	I0920 18:32:52.664889  274391 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0920 18:32:52.664902  274391 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0920 18:32:52.664912  274391 command_runner.go:130] > # live configuration reload.
	I0920 18:32:52.664919  274391 command_runner.go:130] > # log_level = "info"
	I0920 18:32:52.664931  274391 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0920 18:32:52.664940  274391 command_runner.go:130] > # This option supports live configuration reload.
	I0920 18:32:52.664947  274391 command_runner.go:130] > # log_filter = ""
	I0920 18:32:52.664958  274391 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0920 18:32:52.664971  274391 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0920 18:32:52.664978  274391 command_runner.go:130] > # separated by comma.
	I0920 18:32:52.664993  274391 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 18:32:52.665002  274391 command_runner.go:130] > # uid_mappings = ""
	I0920 18:32:52.665016  274391 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0920 18:32:52.665029  274391 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0920 18:32:52.665039  274391 command_runner.go:130] > # separated by comma.
	I0920 18:32:52.665050  274391 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 18:32:52.665060  274391 command_runner.go:130] > # gid_mappings = ""
	I0920 18:32:52.665070  274391 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0920 18:32:52.665083  274391 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0920 18:32:52.665094  274391 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0920 18:32:52.665109  274391 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 18:32:52.665119  274391 command_runner.go:130] > # minimum_mappable_uid = -1
	I0920 18:32:52.665129  274391 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0920 18:32:52.665140  274391 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0920 18:32:52.665151  274391 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0920 18:32:52.665165  274391 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 18:32:52.665172  274391 command_runner.go:130] > # minimum_mappable_gid = -1
	I0920 18:32:52.665205  274391 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0920 18:32:52.665220  274391 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0920 18:32:52.665228  274391 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0920 18:32:52.665234  274391 command_runner.go:130] > # ctr_stop_timeout = 30
	I0920 18:32:52.665247  274391 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0920 18:32:52.665259  274391 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0920 18:32:52.665270  274391 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0920 18:32:52.665278  274391 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0920 18:32:52.665287  274391 command_runner.go:130] > drop_infra_ctr = false
	I0920 18:32:52.665297  274391 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0920 18:32:52.665309  274391 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0920 18:32:52.665324  274391 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0920 18:32:52.665333  274391 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0920 18:32:52.665344  274391 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0920 18:32:52.665361  274391 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0920 18:32:52.665374  274391 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0920 18:32:52.665383  274391 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0920 18:32:52.665392  274391 command_runner.go:130] > # shared_cpuset = ""
	I0920 18:32:52.665403  274391 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0920 18:32:52.665413  274391 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0920 18:32:52.665423  274391 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0920 18:32:52.665435  274391 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0920 18:32:52.665455  274391 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0920 18:32:52.665468  274391 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0920 18:32:52.665482  274391 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0920 18:32:52.665490  274391 command_runner.go:130] > # enable_criu_support = false
	I0920 18:32:52.665500  274391 command_runner.go:130] > # Enable/disable the generation of the container,
	I0920 18:32:52.665510  274391 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0920 18:32:52.665519  274391 command_runner.go:130] > # enable_pod_events = false
	I0920 18:32:52.665528  274391 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0920 18:32:52.665549  274391 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0920 18:32:52.665562  274391 command_runner.go:130] > # default_runtime = "runc"
	I0920 18:32:52.665573  274391 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0920 18:32:52.665588  274391 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0920 18:32:52.665605  274391 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0920 18:32:52.665620  274391 command_runner.go:130] > # creation as a file is not desired either.
	I0920 18:32:52.665637  274391 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0920 18:32:52.665648  274391 command_runner.go:130] > # the hostname is being managed dynamically.
	I0920 18:32:52.665656  274391 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0920 18:32:52.665663  274391 command_runner.go:130] > # ]
	I0920 18:32:52.665673  274391 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0920 18:32:52.665687  274391 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0920 18:32:52.665697  274391 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0920 18:32:52.665702  274391 command_runner.go:130] > # Each entry in the table should follow the format:
	I0920 18:32:52.665705  274391 command_runner.go:130] > #
	I0920 18:32:52.665717  274391 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0920 18:32:52.665725  274391 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0920 18:32:52.665753  274391 command_runner.go:130] > # runtime_type = "oci"
	I0920 18:32:52.665764  274391 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0920 18:32:52.665772  274391 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0920 18:32:52.665781  274391 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0920 18:32:52.665785  274391 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0920 18:32:52.665793  274391 command_runner.go:130] > # monitor_env = []
	I0920 18:32:52.665801  274391 command_runner.go:130] > # privileged_without_host_devices = false
	I0920 18:32:52.665811  274391 command_runner.go:130] > # allowed_annotations = []
	I0920 18:32:52.665823  274391 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0920 18:32:52.665831  274391 command_runner.go:130] > # Where:
	I0920 18:32:52.665840  274391 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0920 18:32:52.665853  274391 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0920 18:32:52.665866  274391 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0920 18:32:52.665878  274391 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0920 18:32:52.665887  274391 command_runner.go:130] > #   in $PATH.
	I0920 18:32:52.665899  274391 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0920 18:32:52.665929  274391 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0920 18:32:52.665940  274391 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0920 18:32:52.665949  274391 command_runner.go:130] > #   state.
	I0920 18:32:52.665960  274391 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0920 18:32:52.665972  274391 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0920 18:32:52.665985  274391 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0920 18:32:52.665996  274391 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0920 18:32:52.666002  274391 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0920 18:32:52.666014  274391 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0920 18:32:52.666025  274391 command_runner.go:130] > #   The currently recognized values are:
	I0920 18:32:52.666044  274391 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0920 18:32:52.666058  274391 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0920 18:32:52.666070  274391 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0920 18:32:52.666082  274391 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0920 18:32:52.666095  274391 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0920 18:32:52.666107  274391 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0920 18:32:52.666122  274391 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0920 18:32:52.666133  274391 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0920 18:32:52.666146  274391 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0920 18:32:52.666158  274391 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0920 18:32:52.666168  274391 command_runner.go:130] > #   deprecated option "conmon".
	I0920 18:32:52.666180  274391 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0920 18:32:52.666191  274391 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0920 18:32:52.666205  274391 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0920 18:32:52.666216  274391 command_runner.go:130] > #   should be moved to the container's cgroup
	I0920 18:32:52.666229  274391 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0920 18:32:52.666238  274391 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0920 18:32:52.666246  274391 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0920 18:32:52.666258  274391 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0920 18:32:52.666266  274391 command_runner.go:130] > #
	I0920 18:32:52.666274  274391 command_runner.go:130] > # Using the seccomp notifier feature:
	I0920 18:32:52.666283  274391 command_runner.go:130] > #
	I0920 18:32:52.666296  274391 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0920 18:32:52.666311  274391 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0920 18:32:52.666319  274391 command_runner.go:130] > #
	I0920 18:32:52.666333  274391 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0920 18:32:52.666342  274391 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0920 18:32:52.666349  274391 command_runner.go:130] > #
	I0920 18:32:52.666362  274391 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0920 18:32:52.666372  274391 command_runner.go:130] > # feature.
	I0920 18:32:52.666381  274391 command_runner.go:130] > #
	I0920 18:32:52.666395  274391 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0920 18:32:52.666407  274391 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0920 18:32:52.666420  274391 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0920 18:32:52.666429  274391 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0920 18:32:52.666441  274391 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0920 18:32:52.666450  274391 command_runner.go:130] > #
	I0920 18:32:52.666461  274391 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0920 18:32:52.666474  274391 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0920 18:32:52.666482  274391 command_runner.go:130] > #
	I0920 18:32:52.666494  274391 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0920 18:32:52.666506  274391 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0920 18:32:52.666514  274391 command_runner.go:130] > #
	I0920 18:32:52.666523  274391 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0920 18:32:52.666534  274391 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0920 18:32:52.666544  274391 command_runner.go:130] > # limitation.
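(Editor's note: to make the notifier workflow described above concrete: the handler must list the annotation in allowed_annotations, and the pod opts in via that annotation with restartPolicy set to Never. A hedged sketch using a CRI-O config drop-in; the file name is made up, and the runc values simply mirror the handler definition that follows in this config.)

    # Illustrative only: re-declare the runc handler with the notifier annotation allowed.
    # Run on the node (e.g. via "minikube ssh").
    sudo tee /etc/crio/crio.conf.d/99-seccomp-notifier.conf <<'EOF'
    [crio.runtime.runtimes.runc]
    runtime_path = "/usr/bin/runc"
    runtime_type = "oci"
    runtime_root = "/run/runc"
    monitor_path = "/usr/libexec/crio/conmon"
    allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
    EOF
    sudo systemctl restart crio
    # The opting-in pod then sets the annotation
    #   io.kubernetes.cri-o.seccompNotifierAction: "stop"
    # and restartPolicy: Never, as described above.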
	I0920 18:32:52.666551  274391 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0920 18:32:52.666562  274391 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0920 18:32:52.666571  274391 command_runner.go:130] > runtime_type = "oci"
	I0920 18:32:52.666580  274391 command_runner.go:130] > runtime_root = "/run/runc"
	I0920 18:32:52.666590  274391 command_runner.go:130] > runtime_config_path = ""
	I0920 18:32:52.666600  274391 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0920 18:32:52.666609  274391 command_runner.go:130] > monitor_cgroup = "pod"
	I0920 18:32:52.666620  274391 command_runner.go:130] > monitor_exec_cgroup = ""
	I0920 18:32:52.666629  274391 command_runner.go:130] > monitor_env = [
	I0920 18:32:52.666641  274391 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0920 18:32:52.666651  274391 command_runner.go:130] > ]
	I0920 18:32:52.666660  274391 command_runner.go:130] > privileged_without_host_devices = false
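(Editor's note: the handler named here, "runc", is what a Kubernetes RuntimeClass ultimately references when the CRI passes a runtime handler. A minimal, hypothetical RuntimeClass pointing at it; the object name is made up.)

    # Create a RuntimeClass that selects the "runc" handler defined above.
    kubectl apply -f - <<'EOF'
    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: example-runc
    handler: runc
    EOF
    # Pods opt in with spec.runtimeClassName: example-runc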
	I0920 18:32:52.666673  274391 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0920 18:32:52.666685  274391 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0920 18:32:52.666698  274391 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0920 18:32:52.666713  274391 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0920 18:32:52.666725  274391 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0920 18:32:52.666736  274391 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0920 18:32:52.666755  274391 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0920 18:32:52.666771  274391 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0920 18:32:52.666783  274391 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0920 18:32:52.666798  274391 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0920 18:32:52.666806  274391 command_runner.go:130] > # Example:
	I0920 18:32:52.666813  274391 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0920 18:32:52.666820  274391 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0920 18:32:52.666828  274391 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0920 18:32:52.666839  274391 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0920 18:32:52.666846  274391 command_runner.go:130] > # cpuset = 0
	I0920 18:32:52.666853  274391 command_runner.go:130] > # cpushares = "0-1"
	I0920 18:32:52.666861  274391 command_runner.go:130] > # Where:
	I0920 18:32:52.666868  274391 command_runner.go:130] > # The workload name is workload-type.
	I0920 18:32:52.666882  274391 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0920 18:32:52.666893  274391 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0920 18:32:52.666903  274391 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0920 18:32:52.666914  274391 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0920 18:32:52.666927  274391 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0920 18:32:52.666939  274391 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0920 18:32:52.666953  274391 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0920 18:32:52.666962  274391 command_runner.go:130] > # Default value is set to true
	I0920 18:32:52.666973  274391 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0920 18:32:52.666985  274391 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0920 18:32:52.666992  274391 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0920 18:32:52.666999  274391 command_runner.go:130] > # Default value is set to 'false'
	I0920 18:32:52.667011  274391 command_runner.go:130] > # disable_hostport_mapping = false
	I0920 18:32:52.667025  274391 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0920 18:32:52.667034  274391 command_runner.go:130] > #
	I0920 18:32:52.667046  274391 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0920 18:32:52.667058  274391 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0920 18:32:52.667068  274391 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0920 18:32:52.667075  274391 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0920 18:32:52.667082  274391 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0920 18:32:52.667088  274391 command_runner.go:130] > [crio.image]
	I0920 18:32:52.667098  274391 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0920 18:32:52.667105  274391 command_runner.go:130] > # default_transport = "docker://"
	I0920 18:32:52.667115  274391 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0920 18:32:52.667126  274391 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0920 18:32:52.667135  274391 command_runner.go:130] > # global_auth_file = ""
	I0920 18:32:52.667143  274391 command_runner.go:130] > # The image used to instantiate infra containers.
	I0920 18:32:52.667150  274391 command_runner.go:130] > # This option supports live configuration reload.
	I0920 18:32:52.667157  274391 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0920 18:32:52.667169  274391 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0920 18:32:52.667181  274391 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0920 18:32:52.667192  274391 command_runner.go:130] > # This option supports live configuration reload.
	I0920 18:32:52.667204  274391 command_runner.go:130] > # pause_image_auth_file = ""
	I0920 18:32:52.667244  274391 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0920 18:32:52.667263  274391 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0920 18:32:52.667277  274391 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0920 18:32:52.667286  274391 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0920 18:32:52.667296  274391 command_runner.go:130] > # pause_command = "/pause"
	I0920 18:32:52.667306  274391 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0920 18:32:52.667319  274391 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0920 18:32:52.667331  274391 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0920 18:32:52.667344  274391 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0920 18:32:52.667356  274391 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0920 18:32:52.667369  274391 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0920 18:32:52.667380  274391 command_runner.go:130] > # pinned_images = [
	I0920 18:32:52.667395  274391 command_runner.go:130] > # ]
	I0920 18:32:52.667405  274391 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0920 18:32:52.667415  274391 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0920 18:32:52.667426  274391 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0920 18:32:52.667439  274391 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0920 18:32:52.667449  274391 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0920 18:32:52.667459  274391 command_runner.go:130] > # signature_policy = ""
	I0920 18:32:52.667467  274391 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0920 18:32:52.667481  274391 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0920 18:32:52.667495  274391 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0920 18:32:52.667506  274391 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0920 18:32:52.667512  274391 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0920 18:32:52.667520  274391 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0920 18:32:52.667534  274391 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0920 18:32:52.667548  274391 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0920 18:32:52.667558  274391 command_runner.go:130] > # changing them here.
	I0920 18:32:52.667565  274391 command_runner.go:130] > # insecure_registries = [
	I0920 18:32:52.667572  274391 command_runner.go:130] > # ]
	I0920 18:32:52.667583  274391 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0920 18:32:52.667593  274391 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0920 18:32:52.667598  274391 command_runner.go:130] > # image_volumes = "mkdir"
	I0920 18:32:52.667605  274391 command_runner.go:130] > # Temporary directory to use for storing big files
	I0920 18:32:52.667621  274391 command_runner.go:130] > # big_files_temporary_dir = ""
	I0920 18:32:52.667634  274391 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0920 18:32:52.667642  274391 command_runner.go:130] > # CNI plugins.
	I0920 18:32:52.667648  274391 command_runner.go:130] > [crio.network]
	I0920 18:32:52.667661  274391 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0920 18:32:52.667669  274391 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0920 18:32:52.667678  274391 command_runner.go:130] > # cni_default_network = ""
	I0920 18:32:52.667684  274391 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0920 18:32:52.667693  274391 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0920 18:32:52.667702  274391 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0920 18:32:52.667712  274391 command_runner.go:130] > # plugin_dirs = [
	I0920 18:32:52.667729  274391 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0920 18:32:52.667738  274391 command_runner.go:130] > # ]
	I0920 18:32:52.667747  274391 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0920 18:32:52.667753  274391 command_runner.go:130] > [crio.metrics]
	I0920 18:32:52.667762  274391 command_runner.go:130] > # Globally enable or disable metrics support.
	I0920 18:32:52.667767  274391 command_runner.go:130] > enable_metrics = true
	I0920 18:32:52.667772  274391 command_runner.go:130] > # Specify enabled metrics collectors.
	I0920 18:32:52.667782  274391 command_runner.go:130] > # By default, all metrics are enabled.
	I0920 18:32:52.667796  274391 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0920 18:32:52.667810  274391 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0920 18:32:52.667822  274391 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0920 18:32:52.667832  274391 command_runner.go:130] > # metrics_collectors = [
	I0920 18:32:52.667838  274391 command_runner.go:130] > # 	"operations",
	I0920 18:32:52.667848  274391 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0920 18:32:52.667853  274391 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0920 18:32:52.667856  274391 command_runner.go:130] > # 	"operations_errors",
	I0920 18:32:52.667863  274391 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0920 18:32:52.667871  274391 command_runner.go:130] > # 	"image_pulls_by_name",
	I0920 18:32:52.667879  274391 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0920 18:32:52.667889  274391 command_runner.go:130] > # 	"image_pulls_failures",
	I0920 18:32:52.667895  274391 command_runner.go:130] > # 	"image_pulls_successes",
	I0920 18:32:52.667905  274391 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0920 18:32:52.667911  274391 command_runner.go:130] > # 	"image_layer_reuse",
	I0920 18:32:52.667922  274391 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0920 18:32:52.667932  274391 command_runner.go:130] > # 	"containers_oom_total",
	I0920 18:32:52.667939  274391 command_runner.go:130] > # 	"containers_oom",
	I0920 18:32:52.667943  274391 command_runner.go:130] > # 	"processes_defunct",
	I0920 18:32:52.667948  274391 command_runner.go:130] > # 	"operations_total",
	I0920 18:32:52.667957  274391 command_runner.go:130] > # 	"operations_latency_seconds",
	I0920 18:32:52.667967  274391 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0920 18:32:52.667976  274391 command_runner.go:130] > # 	"operations_errors_total",
	I0920 18:32:52.667984  274391 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0920 18:32:52.667995  274391 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0920 18:32:52.668006  274391 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0920 18:32:52.668016  274391 command_runner.go:130] > # 	"image_pulls_success_total",
	I0920 18:32:52.668023  274391 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0920 18:32:52.668028  274391 command_runner.go:130] > # 	"containers_oom_count_total",
	I0920 18:32:52.668034  274391 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0920 18:32:52.668038  274391 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0920 18:32:52.668041  274391 command_runner.go:130] > # ]
	I0920 18:32:52.668048  274391 command_runner.go:130] > # The port on which the metrics server will listen.
	I0920 18:32:52.668057  274391 command_runner.go:130] > # metrics_port = 9090
	I0920 18:32:52.668066  274391 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0920 18:32:52.668075  274391 command_runner.go:130] > # metrics_socket = ""
	I0920 18:32:52.668084  274391 command_runner.go:130] > # The certificate for the secure metrics server.
	I0920 18:32:52.668096  274391 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0920 18:32:52.668109  274391 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0920 18:32:52.668119  274391 command_runner.go:130] > # certificate on any modification event.
	I0920 18:32:52.668127  274391 command_runner.go:130] > # metrics_cert = ""
	I0920 18:32:52.668133  274391 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0920 18:32:52.668140  274391 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0920 18:32:52.668144  274391 command_runner.go:130] > # metrics_key = ""
	I0920 18:32:52.668150  274391 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0920 18:32:52.668158  274391 command_runner.go:130] > [crio.tracing]
	I0920 18:32:52.668163  274391 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0920 18:32:52.668169  274391 command_runner.go:130] > # enable_tracing = false
	I0920 18:32:52.668174  274391 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0920 18:32:52.668181  274391 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0920 18:32:52.668187  274391 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0920 18:32:52.668193  274391 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0920 18:32:52.668199  274391 command_runner.go:130] > # CRI-O NRI configuration.
	I0920 18:32:52.668207  274391 command_runner.go:130] > [crio.nri]
	I0920 18:32:52.668215  274391 command_runner.go:130] > # Globally enable or disable NRI.
	I0920 18:32:52.668224  274391 command_runner.go:130] > # enable_nri = false
	I0920 18:32:52.668232  274391 command_runner.go:130] > # NRI socket to listen on.
	I0920 18:32:52.668241  274391 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0920 18:32:52.668250  274391 command_runner.go:130] > # NRI plugin directory to use.
	I0920 18:32:52.668260  274391 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0920 18:32:52.668268  274391 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0920 18:32:52.668276  274391 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0920 18:32:52.668282  274391 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0920 18:32:52.668286  274391 command_runner.go:130] > # nri_disable_connections = false
	I0920 18:32:52.668291  274391 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0920 18:32:52.668298  274391 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0920 18:32:52.668303  274391 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0920 18:32:52.668308  274391 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0920 18:32:52.668315  274391 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0920 18:32:52.668321  274391 command_runner.go:130] > [crio.stats]
	I0920 18:32:52.668327  274391 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0920 18:32:52.668335  274391 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0920 18:32:52.668339  274391 command_runner.go:130] > # stats_collection_period = 0
	I0920 18:32:52.668362  274391 command_runner.go:130] ! time="2024-09-20 18:32:52.628065936Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0920 18:32:52.668376  274391 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
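
	The config dump above leaves the defaults in place except for enable_metrics = true, so CRI-O serves a Prometheus scrape endpoint on the default metrics_port = 9090. As an illustrative aside only (not part of the test run), a minimal Go sketch that fetches that endpoint on the node and counts the exposed samples; the localhost address and port are assumptions taken from the defaults shown above, and it would have to run on the node itself (e.g. via `minikube ssh`).

	// metrics_probe.go - sketch: scrape CRI-O's Prometheus endpoint.
	// Assumes enable_metrics = true and the default metrics_port = 9090
	// from the crio.conf dump above.
	package main

	import (
		"bufio"
		"fmt"
		"log"
		"net/http"
		"strings"
	)

	func main() {
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			log.Fatalf("metrics endpoint not reachable: %v", err)
		}
		defer resp.Body.Close()

		count := 0
		sc := bufio.NewScanner(resp.Body)
		for sc.Scan() {
			line := sc.Text()
			// Skip HELP/TYPE comment lines; count only actual samples.
			if line == "" || strings.HasPrefix(line, "#") {
				continue
			}
			count++
		}
		if err := sc.Err(); err != nil {
			log.Fatalf("reading metrics: %v", err)
		}
		fmt.Printf("CRI-O exposed %d metric samples\n", count)
	}
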
	I0920 18:32:52.668457  274391 cni.go:84] Creating CNI manager for ""
	I0920 18:32:52.668468  274391 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0920 18:32:52.668477  274391 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:32:52.668499  274391 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-029872 NodeName:multinode-029872 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:32:52.668633  274391 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-029872"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
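
	The generated kubeadm config above pins podSubnet 10.244.0.0/16, serviceSubnet 10.96.0.0/12 and advertiseAddress 192.168.39.208. As a hedged illustration only (this is not something minikube runs), a small Go sketch that checks the two CIDRs parse and that the node address does not fall inside either cluster range:

	// cidr_check.go - illustrative sanity check of the CIDRs from the kubeadm
	// config above. Values are copied verbatim from the generated YAML.
	package main

	import (
		"fmt"
		"log"
		"net"
	)

	func main() {
		nodeIP := net.ParseIP("192.168.39.208") // advertiseAddress / node-ip
		if nodeIP == nil {
			log.Fatal("invalid node IP")
		}

		for _, cidr := range []string{"10.244.0.0/16", "10.96.0.0/12"} { // podSubnet, serviceSubnet
			_, ipnet, err := net.ParseCIDR(cidr)
			if err != nil {
				log.Fatalf("bad CIDR %s: %v", cidr, err)
			}
			// The node address must not fall inside either cluster range.
			if ipnet.Contains(nodeIP) {
				log.Fatalf("node IP %s overlaps %s", nodeIP, cidr)
			}
			fmt.Printf("%s parsed OK, does not contain %s\n", cidr, nodeIP)
		}
	}
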
	
	I0920 18:32:52.668697  274391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:32:52.678951  274391 command_runner.go:130] > kubeadm
	I0920 18:32:52.678982  274391 command_runner.go:130] > kubectl
	I0920 18:32:52.678988  274391 command_runner.go:130] > kubelet
	I0920 18:32:52.679011  274391 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:32:52.679069  274391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:32:52.689631  274391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0920 18:32:52.706305  274391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:32:52.723649  274391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0920 18:32:52.740958  274391 ssh_runner.go:195] Run: grep 192.168.39.208	control-plane.minikube.internal$ /etc/hosts
	I0920 18:32:52.745110  274391 command_runner.go:130] > 192.168.39.208	control-plane.minikube.internal
	I0920 18:32:52.745203  274391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:32:52.887598  274391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:32:52.902213  274391 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872 for IP: 192.168.39.208
	I0920 18:32:52.902245  274391 certs.go:194] generating shared ca certs ...
	I0920 18:32:52.902270  274391 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:32:52.902468  274391 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 18:32:52.902529  274391 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 18:32:52.902561  274391 certs.go:256] generating profile certs ...
	I0920 18:32:52.902682  274391 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/client.key
	I0920 18:32:52.902750  274391 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/apiserver.key.b4211b30
	I0920 18:32:52.902784  274391 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/proxy-client.key
	I0920 18:32:52.902796  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 18:32:52.902813  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 18:32:52.902831  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 18:32:52.902849  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 18:32:52.902866  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 18:32:52.902885  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 18:32:52.902902  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 18:32:52.902919  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 18:32:52.902971  274391 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 18:32:52.903000  274391 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 18:32:52.903009  274391 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:32:52.903032  274391 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:32:52.903055  274391 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:32:52.903075  274391 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 18:32:52.903112  274391 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:32:52.903139  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem -> /usr/share/ca-certificates/244849.pem
	I0920 18:32:52.903152  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> /usr/share/ca-certificates/2448492.pem
	I0920 18:32:52.903167  274391 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:32:52.903815  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:32:52.928339  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:32:52.953584  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:32:52.978649  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:32:53.002279  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 18:32:53.026816  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:32:53.050562  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:32:53.075369  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/multinode-029872/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:32:53.099457  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 18:32:53.123057  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 18:32:53.146946  274391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:32:53.172837  274391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:32:53.190071  274391 ssh_runner.go:195] Run: openssl version
	I0920 18:32:53.196010  274391 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0920 18:32:53.196109  274391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 18:32:53.207586  274391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 18:32:53.212166  274391 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 18:32:53.212212  274391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 18:32:53.212260  274391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 18:32:53.217911  274391 command_runner.go:130] > 51391683
	I0920 18:32:53.217996  274391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 18:32:53.227663  274391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 18:32:53.238887  274391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 18:32:53.243872  274391 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 18:32:53.243920  274391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 18:32:53.243965  274391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 18:32:53.249728  274391 command_runner.go:130] > 3ec20f2e
	I0920 18:32:53.249829  274391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:32:53.259760  274391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:32:53.270759  274391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:32:53.275236  274391 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:32:53.275281  274391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:32:53.275336  274391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:32:53.281233  274391 command_runner.go:130] > b5213941
	I0920 18:32:53.281328  274391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
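
	The three cert steps above follow the same pattern: hash the PEM with `openssl x509 -hash -noout`, then symlink it as /etc/ssl/certs/<hash>.0 so the system trust store picks it up. A sketch of that pattern in Go via os/exec, using the minikubeCA path from the log above; it assumes openssl is on PATH and that the process may write to /etc/ssl/certs (minikube does this over SSH with sudo, not with this code).

	// hash_link.go - sketch of the certificate trust step logged above: compute
	// the OpenSSL subject hash of a PEM file and expose it as /etc/ssl/certs/<hash>.0.
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above

		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			log.Fatalf("openssl x509 -hash failed: %v", err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the run above

		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // replace a stale link if present
		if err := os.Symlink(pem, link); err != nil {
			log.Fatalf("symlink %s -> %s failed: %v", link, pem, err)
		}
		fmt.Printf("linked %s -> %s\n", link, pem)
	}
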
	I0920 18:32:53.291514  274391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:32:53.296510  274391 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:32:53.296548  274391 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0920 18:32:53.296557  274391 command_runner.go:130] > Device: 253,1	Inode: 531240      Links: 1
	I0920 18:32:53.296567  274391 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0920 18:32:53.296577  274391 command_runner.go:130] > Access: 2024-09-20 18:26:03.723344368 +0000
	I0920 18:32:53.296586  274391 command_runner.go:130] > Modify: 2024-09-20 18:26:03.723344368 +0000
	I0920 18:32:53.296594  274391 command_runner.go:130] > Change: 2024-09-20 18:26:03.723344368 +0000
	I0920 18:32:53.296599  274391 command_runner.go:130] >  Birth: 2024-09-20 18:26:03.723344368 +0000
	I0920 18:32:53.296677  274391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:32:53.303122  274391 command_runner.go:130] > Certificate will not expire
	I0920 18:32:53.303197  274391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:32:53.309196  274391 command_runner.go:130] > Certificate will not expire
	I0920 18:32:53.309296  274391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:32:53.315161  274391 command_runner.go:130] > Certificate will not expire
	I0920 18:32:53.315260  274391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:32:53.321259  274391 command_runner.go:130] > Certificate will not expire
	I0920 18:32:53.321431  274391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:32:53.327389  274391 command_runner.go:130] > Certificate will not expire
	I0920 18:32:53.327562  274391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 18:32:53.333226  274391 command_runner.go:130] > Certificate will not expire
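
	Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate expires within the next 24 hours (86400 seconds). The same check can be expressed in pure Go with crypto/x509; the sketch below is illustrative only and uses one of the certificate paths probed above.

	// checkend.go - Go equivalent of `openssl x509 -noout -checkend 86400`:
	// report whether a PEM certificate expires within the next 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		// One of the certificates probed in the log above.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}

		if time.Until(cert.NotAfter) > 24*time.Hour {
			fmt.Println("Certificate will not expire") // same wording as openssl
		} else {
			fmt.Println("Certificate will expire")
		}
	}
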
	I0920 18:32:53.333378  274391 kubeadm.go:392] StartCluster: {Name:multinode-029872 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-029872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:
false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:32:53.333521  274391 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:32:53.333573  274391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:32:53.376062  274391 command_runner.go:130] > a3fb0bb7a4b9fc35b775e931ef2a1bbfbec34e679ca6b6d8fb7c78ac59be2289
	I0920 18:32:53.376094  274391 command_runner.go:130] > 3897ce884a96e9c37e2ae2f7bed67590af6954a8e687b9ba88a01d22e29a3246
	I0920 18:32:53.376103  274391 command_runner.go:130] > b35d8551f8a920c627d5b0364473c8d5bc743cf417dce9937c0769445051c907
	I0920 18:32:53.376119  274391 command_runner.go:130] > 2870a2ffc4d8472bf57d2be157ab5bee83a8ad11be43f8950b4964e68b24dc82
	I0920 18:32:53.376126  274391 command_runner.go:130] > 073a54c38674e8692d0308e45f3e05cc9179f3d5b85e00aa5d42bd126e0196ed
	I0920 18:32:53.376134  274391 command_runner.go:130] > 1804be53eac55fb70afccfdc2b30a9f18eb4952522eb900016095c20d5772daa
	I0920 18:32:53.376141  274391 command_runner.go:130] > c529d2d1a67659a8727534f484783b19b3e28f5fdea0dc634075c23cb408373a
	I0920 18:32:53.376150  274391 command_runner.go:130] > 705d3dd400a4ba3de60e31139e64e3ed80bcb3d04c7dac4ca65633badd43a643
	I0920 18:32:53.376178  274391 cri.go:89] found id: "a3fb0bb7a4b9fc35b775e931ef2a1bbfbec34e679ca6b6d8fb7c78ac59be2289"
	I0920 18:32:53.376189  274391 cri.go:89] found id: "3897ce884a96e9c37e2ae2f7bed67590af6954a8e687b9ba88a01d22e29a3246"
	I0920 18:32:53.376193  274391 cri.go:89] found id: "b35d8551f8a920c627d5b0364473c8d5bc743cf417dce9937c0769445051c907"
	I0920 18:32:53.376199  274391 cri.go:89] found id: "2870a2ffc4d8472bf57d2be157ab5bee83a8ad11be43f8950b4964e68b24dc82"
	I0920 18:32:53.376207  274391 cri.go:89] found id: "073a54c38674e8692d0308e45f3e05cc9179f3d5b85e00aa5d42bd126e0196ed"
	I0920 18:32:53.376214  274391 cri.go:89] found id: "1804be53eac55fb70afccfdc2b30a9f18eb4952522eb900016095c20d5772daa"
	I0920 18:32:53.376219  274391 cri.go:89] found id: "c529d2d1a67659a8727534f484783b19b3e28f5fdea0dc634075c23cb408373a"
	I0920 18:32:53.376224  274391 cri.go:89] found id: "705d3dd400a4ba3de60e31139e64e3ed80bcb3d04c7dac4ca65633badd43a643"
	I0920 18:32:53.376228  274391 cri.go:89] found id: ""
	I0920 18:32:53.376333  274391 ssh_runner.go:195] Run: sudo runc list -f json
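
	The eight container IDs listed above come from the crictl invocation shown at cri.go:54, i.e. `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`, which returns both running and exited kube-system containers. A sketch that issues the same query locally from Go; it assumes crictl is on PATH and has access to the CRI-O socket (run as root on the node).

	// list_kube_system.go - sketch of the CRI listing step above: ask crictl for
	// all container IDs labelled with the kube-system namespace.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			log.Fatalf("crictl ps failed: %v", err)
		}

		ids := strings.Fields(string(out)) // one 64-hex ID per line
		for _, id := range ids {
			fmt.Println("found id:", id) // mirrors the cri.go log lines above
		}
		fmt.Printf("%d kube-system containers (running or exited)\n", len(ids))
	}
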
	
	
	==> CRI-O <==
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.302842120Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857424302811612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=133f93a5-215e-4f11-b922-150446c46148 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.303411676Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dfc47247-5023-4669-9fc2-63f4ad3b474a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.303497586Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dfc47247-5023-4669-9fc2-63f4ad3b474a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.304471562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1325992f968925a7e0bd8ca36eb91f0be08eda48728e64ed2323ec77bf2f5b0,PodSandboxId:6ca8890cc99c3b272bf592253c07377789135a7a6de89eae7beb4f99f7572118,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726857212468347801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8vvbm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33451c66-27fc-4c97-aaa4-63e6156009b7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc1eeb0e023a3b826ce235ee9394ed39e834b1da0bffac4611d023fe0bd4d655,PodSandboxId:234ae40772716825c09687a0cf35c83a487bd28a84d425ee7262c589f92026f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726857178727556714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ff436-5e0e-43f2-a670-936b8d8bfe78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec4fb8debfebe359a87d116ad45d4dc578db044a0e0d812c62a956fe01ffbb1c,PodSandboxId:99c2381eaa0c1e2364ea6421d198461bcacff22a2c3d1b3f71e5a95feea1b6b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726857178517786604,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mjk2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d603c3d1-f8f2-46e1-ad57-abb6a019907b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90f07dfbad75d09ccff8e3c009a0061d0cf92a7d37b500882f4c886b9021a7e,PodSandboxId:6243608fd21a5a40bae600a7f50b44c7ba0ff09300ba771b16cf662acd47d9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726857175917382785,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ff071bfc2a4b8c94c23470636cf34671,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6ed28af14ff16bd8c9770970f355fc2cc818a8f475c5888fc4fdb56c129cb7,PodSandboxId:7790f09b328412d22d5f040b38f6d86121d85423c5db6f64d97f9fa1082a9ae4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726857175888286163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80da
0460aa0e8574bc96c14fafa16f14,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9317bb7bef08ea5c7f685012c221f64907dff56faa882dace075d4e2103bc2d,PodSandboxId:ca278c734be154c51b62847db506962be86313eb947f2a6a3622ef9cd52a0e48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726857175866848551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa568adb73211bd18d
c98541f4c15c8b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e48e9537db159e4367676ef9f93e6029c5303f946601f6dd01412cc1d4d35a0f,PodSandboxId:c8c9f8c7d4b6e06676c4e73b0b9bcd256ead338891b81cc85ff430e9adde0d70,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726857173916456917,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gmkqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e687c4b-71f4-4f25-8ef3-ce1b97ab79b4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d911e8c81f08e255bbb2b2f2e819d0947975821667db477cfaa24cdad47a8b,PodSandboxId:f56e0d8961714499c0bf769aad4040be2692b57207310a0f2caed28f84fb3dcb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726857173762077190,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5spcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2d81ad-92ae-45cb-bd30-829846b13661,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7aabb2691f20becaa4c5ca1e67cb43dae0f6e44f71b6f5b06ff5407e565c8f6,PodSandboxId:99c2381eaa0c1e2364ea6421d198461bcacff22a2c3d1b3f71e5a95feea1b6b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_CREATED,CreatedAt:1726857173824481004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mjk2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d603c3d1-f8f2-46e1-ad57-abb6a019907b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernete
s.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d973e006738759e76ecbf904a95ae2da1a8d48bc4493938303f4bb468347faa,PodSandboxId:91c2c951af446cc6dda00b7c57a130a5c59242b86af430fce1d3ffb89740c84a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726857173662917592,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-02987
2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a69aaf129c517f704df3c8fec3c8ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d18b934b362ca3e58d24504c3767c80803b43675fff12590d1d101aadcc6f72,PodSandboxId:c2899b18f212e4f8b6143d021264fb20d3ff5911135ecbc13b258c0088a3b1d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726856850078067054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8vvbm,
io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33451c66-27fc-4c97-aaa4-63e6156009b7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3897ce884a96e9c37e2ae2f7bed67590af6954a8e687b9ba88a01d22e29a3246,PodSandboxId:5330b65b85bb0874ab88295defaec3a4a81adb99453b8e972d310a90ec5010ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856791102386218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 8f9ff436-5e0e-43f2-a670-936b8d8bfe78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b35d8551f8a920c627d5b0364473c8d5bc743cf417dce9937c0769445051c907,PodSandboxId:4a27ce4bddd70fb49ec3a10e1ea41fd6a7bbbe161b7bb308f44d8d56ea71e7e6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726856779334068679,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gmkqk,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 9e687c4b-71f4-4f25-8ef3-ce1b97ab79b4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2870a2ffc4d8472bf57d2be157ab5bee83a8ad11be43f8950b4964e68b24dc82,PodSandboxId:cae525d4892a631162052f00172d6fd1fa1c0432a1fdb190dd825d319eac6b92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726856779204146465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5spcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2d81ad-92ae-45cb-bd3
0-829846b13661,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073a54c38674e8692d0308e45f3e05cc9179f3d5b85e00aa5d42bd126e0196ed,PodSandboxId:8439213e80207e430fcf31af3ff01c520bc776a8a996e5354212bc4135303289,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726856766988752247,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80da0460aa0e8574bc96c14fafa16f14,
},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705d3dd400a4ba3de60e31139e64e3ed80bcb3d04c7dac4ca65633badd43a643,PodSandboxId:6432f066bd8e9d8a86557cce08ccb49a49bc98e74cd4ad0c3de750ccecdf00f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726856766881282847,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a69aaf129c517f704df3c8fec3c8ea,},Annotations:map[string]string{io.kubernetes
.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1804be53eac55fb70afccfdc2b30a9f18eb4952522eb900016095c20d5772daa,PodSandboxId:1e755d432d60b11649a2bcee899436c37ccdde20c2dac5525d003db4782bbbc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726856766906045930,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa568adb73211bd18dc98541f4c15c8b,},Annotations:map[string]string{io.kubernetes.container.hash
: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c529d2d1a67659a8727534f484783b19b3e28f5fdea0dc634075c23cb408373a,PodSandboxId:dacb801d8f81c45556bf021618f4f58fa009a057aa261a132ea23089126b13b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726856766886909919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff071bfc2a4b8c94c23470636cf34671,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dfc47247-5023-4669-9fc2-63f4ad3b474a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.352781927Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=450e77f8-6bcd-4845-ac76-db275f4d6e9c name=/runtime.v1.RuntimeService/Version
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.352858154Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=450e77f8-6bcd-4845-ac76-db275f4d6e9c name=/runtime.v1.RuntimeService/Version
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.354025172Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0b58440f-d0fc-4955-988b-d0ad883bbd2a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.354441388Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857424354377420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b58440f-d0fc-4955-988b-d0ad883bbd2a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.355165590Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cd51d4d-4c46-461c-8ec5-2695743c6f3d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.355222873Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cd51d4d-4c46-461c-8ec5-2695743c6f3d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.355658445Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1325992f968925a7e0bd8ca36eb91f0be08eda48728e64ed2323ec77bf2f5b0,PodSandboxId:6ca8890cc99c3b272bf592253c07377789135a7a6de89eae7beb4f99f7572118,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726857212468347801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8vvbm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33451c66-27fc-4c97-aaa4-63e6156009b7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc1eeb0e023a3b826ce235ee9394ed39e834b1da0bffac4611d023fe0bd4d655,PodSandboxId:234ae40772716825c09687a0cf35c83a487bd28a84d425ee7262c589f92026f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726857178727556714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ff436-5e0e-43f2-a670-936b8d8bfe78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec4fb8debfebe359a87d116ad45d4dc578db044a0e0d812c62a956fe01ffbb1c,PodSandboxId:99c2381eaa0c1e2364ea6421d198461bcacff22a2c3d1b3f71e5a95feea1b6b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726857178517786604,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mjk2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d603c3d1-f8f2-46e1-ad57-abb6a019907b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90f07dfbad75d09ccff8e3c009a0061d0cf92a7d37b500882f4c886b9021a7e,PodSandboxId:6243608fd21a5a40bae600a7f50b44c7ba0ff09300ba771b16cf662acd47d9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726857175917382785,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ff071bfc2a4b8c94c23470636cf34671,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6ed28af14ff16bd8c9770970f355fc2cc818a8f475c5888fc4fdb56c129cb7,PodSandboxId:7790f09b328412d22d5f040b38f6d86121d85423c5db6f64d97f9fa1082a9ae4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726857175888286163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80da
0460aa0e8574bc96c14fafa16f14,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9317bb7bef08ea5c7f685012c221f64907dff56faa882dace075d4e2103bc2d,PodSandboxId:ca278c734be154c51b62847db506962be86313eb947f2a6a3622ef9cd52a0e48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726857175866848551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa568adb73211bd18d
c98541f4c15c8b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e48e9537db159e4367676ef9f93e6029c5303f946601f6dd01412cc1d4d35a0f,PodSandboxId:c8c9f8c7d4b6e06676c4e73b0b9bcd256ead338891b81cc85ff430e9adde0d70,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726857173916456917,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gmkqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e687c4b-71f4-4f25-8ef3-ce1b97ab79b4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d911e8c81f08e255bbb2b2f2e819d0947975821667db477cfaa24cdad47a8b,PodSandboxId:f56e0d8961714499c0bf769aad4040be2692b57207310a0f2caed28f84fb3dcb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726857173762077190,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5spcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2d81ad-92ae-45cb-bd30-829846b13661,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7aabb2691f20becaa4c5ca1e67cb43dae0f6e44f71b6f5b06ff5407e565c8f6,PodSandboxId:99c2381eaa0c1e2364ea6421d198461bcacff22a2c3d1b3f71e5a95feea1b6b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_CREATED,CreatedAt:1726857173824481004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mjk2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d603c3d1-f8f2-46e1-ad57-abb6a019907b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernete
s.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d973e006738759e76ecbf904a95ae2da1a8d48bc4493938303f4bb468347faa,PodSandboxId:91c2c951af446cc6dda00b7c57a130a5c59242b86af430fce1d3ffb89740c84a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726857173662917592,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-02987
2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a69aaf129c517f704df3c8fec3c8ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d18b934b362ca3e58d24504c3767c80803b43675fff12590d1d101aadcc6f72,PodSandboxId:c2899b18f212e4f8b6143d021264fb20d3ff5911135ecbc13b258c0088a3b1d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726856850078067054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8vvbm,
io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33451c66-27fc-4c97-aaa4-63e6156009b7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3897ce884a96e9c37e2ae2f7bed67590af6954a8e687b9ba88a01d22e29a3246,PodSandboxId:5330b65b85bb0874ab88295defaec3a4a81adb99453b8e972d310a90ec5010ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856791102386218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 8f9ff436-5e0e-43f2-a670-936b8d8bfe78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b35d8551f8a920c627d5b0364473c8d5bc743cf417dce9937c0769445051c907,PodSandboxId:4a27ce4bddd70fb49ec3a10e1ea41fd6a7bbbe161b7bb308f44d8d56ea71e7e6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726856779334068679,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gmkqk,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 9e687c4b-71f4-4f25-8ef3-ce1b97ab79b4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2870a2ffc4d8472bf57d2be157ab5bee83a8ad11be43f8950b4964e68b24dc82,PodSandboxId:cae525d4892a631162052f00172d6fd1fa1c0432a1fdb190dd825d319eac6b92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726856779204146465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5spcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2d81ad-92ae-45cb-bd3
0-829846b13661,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073a54c38674e8692d0308e45f3e05cc9179f3d5b85e00aa5d42bd126e0196ed,PodSandboxId:8439213e80207e430fcf31af3ff01c520bc776a8a996e5354212bc4135303289,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726856766988752247,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80da0460aa0e8574bc96c14fafa16f14,
},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705d3dd400a4ba3de60e31139e64e3ed80bcb3d04c7dac4ca65633badd43a643,PodSandboxId:6432f066bd8e9d8a86557cce08ccb49a49bc98e74cd4ad0c3de750ccecdf00f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726856766881282847,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a69aaf129c517f704df3c8fec3c8ea,},Annotations:map[string]string{io.kubernetes
.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1804be53eac55fb70afccfdc2b30a9f18eb4952522eb900016095c20d5772daa,PodSandboxId:1e755d432d60b11649a2bcee899436c37ccdde20c2dac5525d003db4782bbbc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726856766906045930,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa568adb73211bd18dc98541f4c15c8b,},Annotations:map[string]string{io.kubernetes.container.hash
: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c529d2d1a67659a8727534f484783b19b3e28f5fdea0dc634075c23cb408373a,PodSandboxId:dacb801d8f81c45556bf021618f4f58fa009a057aa261a132ea23089126b13b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726856766886909919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff071bfc2a4b8c94c23470636cf34671,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cd51d4d-4c46-461c-8ec5-2695743c6f3d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.393887476Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b52a9d49-7d35-43cf-b61b-36392930fe79 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.393970519Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b52a9d49-7d35-43cf-b61b-36392930fe79 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.394979707Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=34876ae7-68a4-4f2a-bb46-a1b3e9713671 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.395371591Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857424395350055,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34876ae7-68a4-4f2a-bb46-a1b3e9713671 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.396275877Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d28e46a2-1fab-4974-a263-08a0156a332a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.396333402Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d28e46a2-1fab-4974-a263-08a0156a332a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.396709804Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1325992f968925a7e0bd8ca36eb91f0be08eda48728e64ed2323ec77bf2f5b0,PodSandboxId:6ca8890cc99c3b272bf592253c07377789135a7a6de89eae7beb4f99f7572118,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726857212468347801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8vvbm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33451c66-27fc-4c97-aaa4-63e6156009b7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc1eeb0e023a3b826ce235ee9394ed39e834b1da0bffac4611d023fe0bd4d655,PodSandboxId:234ae40772716825c09687a0cf35c83a487bd28a84d425ee7262c589f92026f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726857178727556714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ff436-5e0e-43f2-a670-936b8d8bfe78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec4fb8debfebe359a87d116ad45d4dc578db044a0e0d812c62a956fe01ffbb1c,PodSandboxId:99c2381eaa0c1e2364ea6421d198461bcacff22a2c3d1b3f71e5a95feea1b6b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726857178517786604,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mjk2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d603c3d1-f8f2-46e1-ad57-abb6a019907b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90f07dfbad75d09ccff8e3c009a0061d0cf92a7d37b500882f4c886b9021a7e,PodSandboxId:6243608fd21a5a40bae600a7f50b44c7ba0ff09300ba771b16cf662acd47d9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726857175917382785,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ff071bfc2a4b8c94c23470636cf34671,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6ed28af14ff16bd8c9770970f355fc2cc818a8f475c5888fc4fdb56c129cb7,PodSandboxId:7790f09b328412d22d5f040b38f6d86121d85423c5db6f64d97f9fa1082a9ae4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726857175888286163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80da
0460aa0e8574bc96c14fafa16f14,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9317bb7bef08ea5c7f685012c221f64907dff56faa882dace075d4e2103bc2d,PodSandboxId:ca278c734be154c51b62847db506962be86313eb947f2a6a3622ef9cd52a0e48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726857175866848551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa568adb73211bd18d
c98541f4c15c8b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e48e9537db159e4367676ef9f93e6029c5303f946601f6dd01412cc1d4d35a0f,PodSandboxId:c8c9f8c7d4b6e06676c4e73b0b9bcd256ead338891b81cc85ff430e9adde0d70,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726857173916456917,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gmkqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e687c4b-71f4-4f25-8ef3-ce1b97ab79b4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d911e8c81f08e255bbb2b2f2e819d0947975821667db477cfaa24cdad47a8b,PodSandboxId:f56e0d8961714499c0bf769aad4040be2692b57207310a0f2caed28f84fb3dcb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726857173762077190,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5spcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2d81ad-92ae-45cb-bd30-829846b13661,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7aabb2691f20becaa4c5ca1e67cb43dae0f6e44f71b6f5b06ff5407e565c8f6,PodSandboxId:99c2381eaa0c1e2364ea6421d198461bcacff22a2c3d1b3f71e5a95feea1b6b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_CREATED,CreatedAt:1726857173824481004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mjk2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d603c3d1-f8f2-46e1-ad57-abb6a019907b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernete
s.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d973e006738759e76ecbf904a95ae2da1a8d48bc4493938303f4bb468347faa,PodSandboxId:91c2c951af446cc6dda00b7c57a130a5c59242b86af430fce1d3ffb89740c84a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726857173662917592,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-02987
2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a69aaf129c517f704df3c8fec3c8ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d18b934b362ca3e58d24504c3767c80803b43675fff12590d1d101aadcc6f72,PodSandboxId:c2899b18f212e4f8b6143d021264fb20d3ff5911135ecbc13b258c0088a3b1d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726856850078067054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8vvbm,
io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33451c66-27fc-4c97-aaa4-63e6156009b7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3897ce884a96e9c37e2ae2f7bed67590af6954a8e687b9ba88a01d22e29a3246,PodSandboxId:5330b65b85bb0874ab88295defaec3a4a81adb99453b8e972d310a90ec5010ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856791102386218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 8f9ff436-5e0e-43f2-a670-936b8d8bfe78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b35d8551f8a920c627d5b0364473c8d5bc743cf417dce9937c0769445051c907,PodSandboxId:4a27ce4bddd70fb49ec3a10e1ea41fd6a7bbbe161b7bb308f44d8d56ea71e7e6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726856779334068679,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gmkqk,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 9e687c4b-71f4-4f25-8ef3-ce1b97ab79b4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2870a2ffc4d8472bf57d2be157ab5bee83a8ad11be43f8950b4964e68b24dc82,PodSandboxId:cae525d4892a631162052f00172d6fd1fa1c0432a1fdb190dd825d319eac6b92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726856779204146465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5spcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2d81ad-92ae-45cb-bd3
0-829846b13661,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073a54c38674e8692d0308e45f3e05cc9179f3d5b85e00aa5d42bd126e0196ed,PodSandboxId:8439213e80207e430fcf31af3ff01c520bc776a8a996e5354212bc4135303289,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726856766988752247,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80da0460aa0e8574bc96c14fafa16f14,
},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705d3dd400a4ba3de60e31139e64e3ed80bcb3d04c7dac4ca65633badd43a643,PodSandboxId:6432f066bd8e9d8a86557cce08ccb49a49bc98e74cd4ad0c3de750ccecdf00f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726856766881282847,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a69aaf129c517f704df3c8fec3c8ea,},Annotations:map[string]string{io.kubernetes
.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1804be53eac55fb70afccfdc2b30a9f18eb4952522eb900016095c20d5772daa,PodSandboxId:1e755d432d60b11649a2bcee899436c37ccdde20c2dac5525d003db4782bbbc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726856766906045930,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa568adb73211bd18dc98541f4c15c8b,},Annotations:map[string]string{io.kubernetes.container.hash
: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c529d2d1a67659a8727534f484783b19b3e28f5fdea0dc634075c23cb408373a,PodSandboxId:dacb801d8f81c45556bf021618f4f58fa009a057aa261a132ea23089126b13b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726856766886909919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff071bfc2a4b8c94c23470636cf34671,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d28e46a2-1fab-4974-a263-08a0156a332a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.437646968Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=364bc4fd-3220-4114-8ff0-00ded897938b name=/runtime.v1.RuntimeService/Version
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.437723643Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=364bc4fd-3220-4114-8ff0-00ded897938b name=/runtime.v1.RuntimeService/Version
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.438962070Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=57d86941-0dc7-4c2c-9d76-c8a1734a91fa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.439345214Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857424439318293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57d86941-0dc7-4c2c-9d76-c8a1734a91fa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.439927136Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d9407eaf-34c7-4462-8517-d56bf540399a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.439985704Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d9407eaf-34c7-4462-8517-d56bf540399a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:37:04 multinode-029872 crio[2704]: time="2024-09-20 18:37:04.440371228Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1325992f968925a7e0bd8ca36eb91f0be08eda48728e64ed2323ec77bf2f5b0,PodSandboxId:6ca8890cc99c3b272bf592253c07377789135a7a6de89eae7beb4f99f7572118,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726857212468347801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8vvbm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33451c66-27fc-4c97-aaa4-63e6156009b7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc1eeb0e023a3b826ce235ee9394ed39e834b1da0bffac4611d023fe0bd4d655,PodSandboxId:234ae40772716825c09687a0cf35c83a487bd28a84d425ee7262c589f92026f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726857178727556714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ff436-5e0e-43f2-a670-936b8d8bfe78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec4fb8debfebe359a87d116ad45d4dc578db044a0e0d812c62a956fe01ffbb1c,PodSandboxId:99c2381eaa0c1e2364ea6421d198461bcacff22a2c3d1b3f71e5a95feea1b6b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726857178517786604,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mjk2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d603c3d1-f8f2-46e1-ad57-abb6a019907b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90f07dfbad75d09ccff8e3c009a0061d0cf92a7d37b500882f4c886b9021a7e,PodSandboxId:6243608fd21a5a40bae600a7f50b44c7ba0ff09300ba771b16cf662acd47d9ef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726857175917382785,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ff071bfc2a4b8c94c23470636cf34671,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6ed28af14ff16bd8c9770970f355fc2cc818a8f475c5888fc4fdb56c129cb7,PodSandboxId:7790f09b328412d22d5f040b38f6d86121d85423c5db6f64d97f9fa1082a9ae4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726857175888286163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80da
0460aa0e8574bc96c14fafa16f14,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9317bb7bef08ea5c7f685012c221f64907dff56faa882dace075d4e2103bc2d,PodSandboxId:ca278c734be154c51b62847db506962be86313eb947f2a6a3622ef9cd52a0e48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726857175866848551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa568adb73211bd18d
c98541f4c15c8b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e48e9537db159e4367676ef9f93e6029c5303f946601f6dd01412cc1d4d35a0f,PodSandboxId:c8c9f8c7d4b6e06676c4e73b0b9bcd256ead338891b81cc85ff430e9adde0d70,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726857173916456917,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gmkqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e687c4b-71f4-4f25-8ef3-ce1b97ab79b4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d911e8c81f08e255bbb2b2f2e819d0947975821667db477cfaa24cdad47a8b,PodSandboxId:f56e0d8961714499c0bf769aad4040be2692b57207310a0f2caed28f84fb3dcb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726857173762077190,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5spcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2d81ad-92ae-45cb-bd30-829846b13661,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7aabb2691f20becaa4c5ca1e67cb43dae0f6e44f71b6f5b06ff5407e565c8f6,PodSandboxId:99c2381eaa0c1e2364ea6421d198461bcacff22a2c3d1b3f71e5a95feea1b6b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_CREATED,CreatedAt:1726857173824481004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mjk2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d603c3d1-f8f2-46e1-ad57-abb6a019907b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernete
s.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d973e006738759e76ecbf904a95ae2da1a8d48bc4493938303f4bb468347faa,PodSandboxId:91c2c951af446cc6dda00b7c57a130a5c59242b86af430fce1d3ffb89740c84a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726857173662917592,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-02987
2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a69aaf129c517f704df3c8fec3c8ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d18b934b362ca3e58d24504c3767c80803b43675fff12590d1d101aadcc6f72,PodSandboxId:c2899b18f212e4f8b6143d021264fb20d3ff5911135ecbc13b258c0088a3b1d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726856850078067054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8vvbm,
io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33451c66-27fc-4c97-aaa4-63e6156009b7,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3897ce884a96e9c37e2ae2f7bed67590af6954a8e687b9ba88a01d22e29a3246,PodSandboxId:5330b65b85bb0874ab88295defaec3a4a81adb99453b8e972d310a90ec5010ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726856791102386218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 8f9ff436-5e0e-43f2-a670-936b8d8bfe78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b35d8551f8a920c627d5b0364473c8d5bc743cf417dce9937c0769445051c907,PodSandboxId:4a27ce4bddd70fb49ec3a10e1ea41fd6a7bbbe161b7bb308f44d8d56ea71e7e6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726856779334068679,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gmkqk,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 9e687c4b-71f4-4f25-8ef3-ce1b97ab79b4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2870a2ffc4d8472bf57d2be157ab5bee83a8ad11be43f8950b4964e68b24dc82,PodSandboxId:cae525d4892a631162052f00172d6fd1fa1c0432a1fdb190dd825d319eac6b92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726856779204146465,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5spcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2d81ad-92ae-45cb-bd3
0-829846b13661,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073a54c38674e8692d0308e45f3e05cc9179f3d5b85e00aa5d42bd126e0196ed,PodSandboxId:8439213e80207e430fcf31af3ff01c520bc776a8a996e5354212bc4135303289,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726856766988752247,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80da0460aa0e8574bc96c14fafa16f14,
},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705d3dd400a4ba3de60e31139e64e3ed80bcb3d04c7dac4ca65633badd43a643,PodSandboxId:6432f066bd8e9d8a86557cce08ccb49a49bc98e74cd4ad0c3de750ccecdf00f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726856766881282847,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a69aaf129c517f704df3c8fec3c8ea,},Annotations:map[string]string{io.kubernetes
.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1804be53eac55fb70afccfdc2b30a9f18eb4952522eb900016095c20d5772daa,PodSandboxId:1e755d432d60b11649a2bcee899436c37ccdde20c2dac5525d003db4782bbbc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726856766906045930,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa568adb73211bd18dc98541f4c15c8b,},Annotations:map[string]string{io.kubernetes.container.hash
: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c529d2d1a67659a8727534f484783b19b3e28f5fdea0dc634075c23cb408373a,PodSandboxId:dacb801d8f81c45556bf021618f4f58fa009a057aa261a132ea23089126b13b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726856766886909919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-029872,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff071bfc2a4b8c94c23470636cf34671,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d9407eaf-34c7-4462-8517-d56bf540399a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a1325992f9689       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   6ca8890cc99c3       busybox-7dff88458-8vvbm
	cc1eeb0e023a3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   234ae40772716       storage-provisioner
	ec4fb8debfebe       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   2                   99c2381eaa0c1       coredns-7c65d6cfc9-mjk2z
	f90f07dfbad75       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   6243608fd21a5       kube-controller-manager-multinode-029872
	4e6ed28af14ff       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   7790f09b32841       kube-scheduler-multinode-029872
	d9317bb7bef08       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   ca278c734be15       kube-apiserver-multinode-029872
	e48e9537db159       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   c8c9f8c7d4b6e       kindnet-gmkqk
	b7aabb2691f20       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Created             coredns                   1                   99c2381eaa0c1       coredns-7c65d6cfc9-mjk2z
	63d911e8c81f0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   f56e0d8961714       kube-proxy-5spcx
	1d973e0067387       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   91c2c951af446       etcd-multinode-029872
	0d18b934b362c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   c2899b18f212e       busybox-7dff88458-8vvbm
	3897ce884a96e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   5330b65b85bb0       storage-provisioner
	b35d8551f8a92       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   4a27ce4bddd70       kindnet-gmkqk
	2870a2ffc4d84       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   cae525d4892a6       kube-proxy-5spcx
	073a54c38674e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      10 minutes ago      Exited              kube-scheduler            0                   8439213e80207       kube-scheduler-multinode-029872
	1804be53eac55       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      10 minutes ago      Exited              kube-apiserver            0                   1e755d432d60b       kube-apiserver-multinode-029872
	c529d2d1a6765       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      10 minutes ago      Exited              kube-controller-manager   0                   dacb801d8f81c       kube-controller-manager-multinode-029872
	705d3dd400a4b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   6432f066bd8e9       etcd-multinode-029872
	
	
	==> coredns [b7aabb2691f20becaa4c5ca1e67cb43dae0f6e44f71b6f5b06ff5407e565c8f6] <==
	
	
	==> coredns [ec4fb8debfebe359a87d116ad45d4dc578db044a0e0d812c62a956fe01ffbb1c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52393 - 61376 "HINFO IN 7918400279444024197.6702250931947430502. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014554351s
	
	
	==> describe nodes <==
	Name:               multinode-029872
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-029872
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=multinode-029872
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_26_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:26:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-029872
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:37:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:32:58 +0000   Fri, 20 Sep 2024 18:26:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:32:58 +0000   Fri, 20 Sep 2024 18:26:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:32:58 +0000   Fri, 20 Sep 2024 18:26:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:32:58 +0000   Fri, 20 Sep 2024 18:26:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.208
	  Hostname:    multinode-029872
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 43d7473b27e74347bacea3bc028d8640
	  System UUID:                43d7473b-27e7-4347-bace-a3bc028d8640
	  Boot ID:                    a5fe9348-cdf2-40c9-ae57-5402902cd3cd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8vvbm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	  kube-system                 coredns-7c65d6cfc9-mjk2z                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-029872                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-gmkqk                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-029872             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-029872    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-5spcx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-029872             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m5s                 kube-proxy       
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)    kubelet          Node multinode-029872 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)    kubelet          Node multinode-029872 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)    kubelet          Node multinode-029872 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-029872 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-029872 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-029872 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-029872 event: Registered Node multinode-029872 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-029872 status is now: NodeReady
	  Normal  Starting                 4m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node multinode-029872 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node multinode-029872 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m9s)  kubelet          Node multinode-029872 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                 node-controller  Node multinode-029872 event: Registered Node multinode-029872 in Controller
	
	
	Name:               multinode-029872-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-029872-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=multinode-029872
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_33_36_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:33:36 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-029872-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:34:37 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 18:34:07 +0000   Fri, 20 Sep 2024 18:35:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 18:34:07 +0000   Fri, 20 Sep 2024 18:35:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 18:34:07 +0000   Fri, 20 Sep 2024 18:35:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 18:34:07 +0000   Fri, 20 Sep 2024 18:35:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.168
	  Hostname:    multinode-029872-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac160e0a0adc42e8936ae664dffb726f
	  System UUID:                ac160e0a-0adc-42e8-936a-e664dffb726f
	  Boot ID:                    b10aa6c0-f090-4f36-9b4a-bc9b2d3ead0f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cz7gz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kindnet-8spmr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-kbppv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m23s                  kube-proxy       
	  Normal  Starting                 9m55s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-029872-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-029872-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-029872-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m40s                  kubelet          Node multinode-029872-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m28s (x2 over 3m28s)  kubelet          Node multinode-029872-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m28s (x2 over 3m28s)  kubelet          Node multinode-029872-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m28s (x2 over 3m28s)  kubelet          Node multinode-029872-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m9s                   kubelet          Node multinode-029872-m02 status is now: NodeReady
	  Normal  NodeNotReady             103s                   node-controller  Node multinode-029872-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.065459] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.170159] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.143132] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.268833] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[Sep20 18:26] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +3.925117] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.061639] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.999075] systemd-fstab-generator[1216]: Ignoring "noauto" option for root device
	[  +0.086942] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.611215] systemd-fstab-generator[1325]: Ignoring "noauto" option for root device
	[  +0.093715] kauditd_printk_skb: 21 callbacks suppressed
	[ +13.435528] kauditd_printk_skb: 60 callbacks suppressed
	[Sep20 18:27] kauditd_printk_skb: 14 callbacks suppressed
	[Sep20 18:32] systemd-fstab-generator[2627]: Ignoring "noauto" option for root device
	[  +0.151807] systemd-fstab-generator[2639]: Ignoring "noauto" option for root device
	[  +0.195751] systemd-fstab-generator[2653]: Ignoring "noauto" option for root device
	[  +0.158670] systemd-fstab-generator[2665]: Ignoring "noauto" option for root device
	[  +0.277223] systemd-fstab-generator[2694]: Ignoring "noauto" option for root device
	[  +5.206252] systemd-fstab-generator[2788]: Ignoring "noauto" option for root device
	[  +0.082771] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.116753] systemd-fstab-generator[3171]: Ignoring "noauto" option for root device
	[  +3.602813] kauditd_printk_skb: 106 callbacks suppressed
	[Sep20 18:33] systemd-fstab-generator[3792]: Ignoring "noauto" option for root device
	[  +0.093622] kauditd_printk_skb: 11 callbacks suppressed
	[ +20.475894] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [1d973e006738759e76ecbf904a95ae2da1a8d48bc4493938303f4bb468347faa] <==
	{"level":"info","ts":"2024-09-20T18:32:54.020824Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fb8a78b66dce1ac7","local-member-id":"7fe6bf77aaafe0f6","added-peer-id":"7fe6bf77aaafe0f6","added-peer-peer-urls":["https://192.168.39.208:2380"]}
	{"level":"info","ts":"2024-09-20T18:32:54.020936Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fb8a78b66dce1ac7","local-member-id":"7fe6bf77aaafe0f6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:32:54.020963Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:32:54.023781Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:32:54.029741Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T18:32:54.030248Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7fe6bf77aaafe0f6","initial-advertise-peer-urls":["https://192.168.39.208:2380"],"listen-peer-urls":["https://192.168.39.208:2380"],"advertise-client-urls":["https://192.168.39.208:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.208:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T18:32:54.030648Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T18:32:54.030790Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.208:2380"}
	{"level":"info","ts":"2024-09-20T18:32:54.030803Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.208:2380"}
	{"level":"info","ts":"2024-09-20T18:32:55.798662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-20T18:32:55.798764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-20T18:32:55.798821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 received MsgPreVoteResp from 7fe6bf77aaafe0f6 at term 2"}
	{"level":"info","ts":"2024-09-20T18:32:55.798861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T18:32:55.798889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 received MsgVoteResp from 7fe6bf77aaafe0f6 at term 3"}
	{"level":"info","ts":"2024-09-20T18:32:55.798924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 became leader at term 3"}
	{"level":"info","ts":"2024-09-20T18:32:55.798951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7fe6bf77aaafe0f6 elected leader 7fe6bf77aaafe0f6 at term 3"}
	{"level":"info","ts":"2024-09-20T18:32:55.804087Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7fe6bf77aaafe0f6","local-member-attributes":"{Name:multinode-029872 ClientURLs:[https://192.168.39.208:2379]}","request-path":"/0/members/7fe6bf77aaafe0f6/attributes","cluster-id":"fb8a78b66dce1ac7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:32:55.804390Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:32:55.805491Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:32:55.810631Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.208:2379"}
	{"level":"info","ts":"2024-09-20T18:32:55.810953Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:32:55.814980Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:32:55.806742Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:32:55.818508Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:32:55.821686Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [705d3dd400a4ba3de60e31139e64e3ed80bcb3d04c7dac4ca65633badd43a643] <==
	{"level":"info","ts":"2024-09-20T18:26:08.027417Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:26:08.029402Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:26:08.029570Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:26:08.029616Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:26:08.030148Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.208:2379"}
	{"level":"info","ts":"2024-09-20T18:26:08.030164Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:26:08.031028Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T18:26:08.030266Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fb8a78b66dce1ac7","local-member-id":"7fe6bf77aaafe0f6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:26:08.033687Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:26:08.033729Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	2024/09/20 18:26:11 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-20T18:27:02.991889Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.568547ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16210304509122378693 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-029872-m02.17f70721892cde7f\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-029872-m02.17f70721892cde7f\" value_size:642 lease:6986932472267602220 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-20T18:27:02.991999Z","caller":"traceutil/trace.go:171","msg":"trace[249892085] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"236.006584ms","start":"2024-09-20T18:27:02.755976Z","end":"2024-09-20T18:27:02.991982Z","steps":["trace[249892085] 'process raft request'  (duration: 74.747174ms)","trace[249892085] 'compare'  (duration: 160.473952ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T18:27:58.849785Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.23444ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16210304509122379201 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-029872-m03.17f7072e8bd98e9c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-029872-m03.17f7072e8bd98e9c\" value_size:646 lease:6986932472267603009 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-20T18:27:58.850194Z","caller":"traceutil/trace.go:171","msg":"trace[420030120] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"210.447159ms","start":"2024-09-20T18:27:58.639703Z","end":"2024-09-20T18:27:58.850151Z","steps":["trace[420030120] 'process raft request'  (duration: 79.729945ms)","trace[420030120] 'compare'  (duration: 130.124327ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T18:31:15.515354Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-20T18:31:15.515499Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-029872","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.208:2380"],"advertise-client-urls":["https://192.168.39.208:2379"]}
	{"level":"warn","ts":"2024-09-20T18:31:15.515688Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:31:15.545751Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:31:15.604626Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.208:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:31:15.604898Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.208:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T18:31:15.605035Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7fe6bf77aaafe0f6","current-leader-member-id":"7fe6bf77aaafe0f6"}
	{"level":"info","ts":"2024-09-20T18:31:15.608384Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.208:2380"}
	{"level":"info","ts":"2024-09-20T18:31:15.608631Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.208:2380"}
	{"level":"info","ts":"2024-09-20T18:31:15.608666Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-029872","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.208:2380"],"advertise-client-urls":["https://192.168.39.208:2379"]}
	
	
	==> kernel <==
	 18:37:04 up 11 min,  0 users,  load average: 0.35, 0.31, 0.16
	Linux multinode-029872 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b35d8551f8a920c627d5b0364473c8d5bc743cf417dce9937c0769445051c907] <==
	I0920 18:30:30.293380       1 main.go:322] Node multinode-029872-m03 has CIDR [10.244.4.0/24] 
	I0920 18:30:40.288729       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 18:30:40.288854       1 main.go:322] Node multinode-029872-m02 has CIDR [10.244.1.0/24] 
	I0920 18:30:40.289000       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0920 18:30:40.289027       1 main.go:322] Node multinode-029872-m03 has CIDR [10.244.4.0/24] 
	I0920 18:30:40.289098       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0920 18:30:40.289118       1 main.go:299] handling current node
	I0920 18:30:50.284637       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0920 18:30:50.284686       1 main.go:299] handling current node
	I0920 18:30:50.284710       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 18:30:50.284715       1 main.go:322] Node multinode-029872-m02 has CIDR [10.244.1.0/24] 
	I0920 18:30:50.284839       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0920 18:30:50.284861       1 main.go:322] Node multinode-029872-m03 has CIDR [10.244.4.0/24] 
	I0920 18:31:00.292654       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0920 18:31:00.292696       1 main.go:299] handling current node
	I0920 18:31:00.292712       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 18:31:00.292718       1 main.go:322] Node multinode-029872-m02 has CIDR [10.244.1.0/24] 
	I0920 18:31:00.292861       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0920 18:31:00.292880       1 main.go:322] Node multinode-029872-m03 has CIDR [10.244.4.0/24] 
	I0920 18:31:10.288691       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 18:31:10.288835       1 main.go:322] Node multinode-029872-m02 has CIDR [10.244.1.0/24] 
	I0920 18:31:10.288988       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0920 18:31:10.289029       1 main.go:322] Node multinode-029872-m03 has CIDR [10.244.4.0/24] 
	I0920 18:31:10.289102       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0920 18:31:10.289122       1 main.go:299] handling current node
	
	
	==> kindnet [e48e9537db159e4367676ef9f93e6029c5303f946601f6dd01412cc1d4d35a0f] <==
	I0920 18:35:59.103876       1 main.go:322] Node multinode-029872-m02 has CIDR [10.244.1.0/24] 
	I0920 18:36:09.111455       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0920 18:36:09.111680       1 main.go:299] handling current node
	I0920 18:36:09.111728       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 18:36:09.111750       1 main.go:322] Node multinode-029872-m02 has CIDR [10.244.1.0/24] 
	I0920 18:36:19.111451       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0920 18:36:19.111588       1 main.go:299] handling current node
	I0920 18:36:19.111636       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 18:36:19.111642       1 main.go:322] Node multinode-029872-m02 has CIDR [10.244.1.0/24] 
	I0920 18:36:29.104072       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 18:36:29.104311       1 main.go:322] Node multinode-029872-m02 has CIDR [10.244.1.0/24] 
	I0920 18:36:29.104616       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0920 18:36:29.104671       1 main.go:299] handling current node
	I0920 18:36:39.103391       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0920 18:36:39.103561       1 main.go:299] handling current node
	I0920 18:36:39.103595       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 18:36:39.103625       1 main.go:322] Node multinode-029872-m02 has CIDR [10.244.1.0/24] 
	I0920 18:36:49.112678       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0920 18:36:49.112794       1 main.go:299] handling current node
	I0920 18:36:49.112831       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 18:36:49.112849       1 main.go:322] Node multinode-029872-m02 has CIDR [10.244.1.0/24] 
	I0920 18:36:59.103454       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0920 18:36:59.103562       1 main.go:299] handling current node
	I0920 18:36:59.103583       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 18:36:59.103594       1 main.go:322] Node multinode-029872-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [1804be53eac55fb70afccfdc2b30a9f18eb4952522eb900016095c20d5772daa] <==
	W0920 18:31:15.537413       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.537444       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.538003       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.538074       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.538112       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.538148       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.538199       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.538241       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.538275       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.538304       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539069       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539102       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539131       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539158       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539191       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539230       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539259       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539288       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539323       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539939       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.539973       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.540000       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.540027       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.540056       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:31:15.540296       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d9317bb7bef08ea5c7f685012c221f64907dff56faa882dace075d4e2103bc2d] <==
	I0920 18:32:58.144442       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 18:32:58.144483       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 18:32:58.149779       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 18:32:58.150593       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 18:32:58.150732       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 18:32:58.151347       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 18:32:58.151401       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 18:32:58.157025       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0920 18:32:58.178603       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 18:32:58.180570       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 18:32:58.180739       1 policy_source.go:224] refreshing policies
	I0920 18:32:58.186802       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 18:32:58.189015       1 aggregator.go:171] initial CRD sync complete...
	I0920 18:32:58.189039       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 18:32:58.189045       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 18:32:58.189052       1 cache.go:39] Caches are synced for autoregister controller
	I0920 18:32:58.254680       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 18:32:59.052554       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 18:33:00.149499       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 18:33:00.303260       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 18:33:00.323463       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 18:33:00.416666       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 18:33:00.432896       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0920 18:33:01.661960       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 18:33:01.757195       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c529d2d1a67659a8727534f484783b19b3e28f5fdea0dc634075c23cb408373a] <==
	I0920 18:28:49.085887       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:28:49.086479       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-029872-m02"
	I0920 18:28:50.449141       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-029872-m02"
	I0920 18:28:50.451071       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-029872-m03\" does not exist"
	I0920 18:28:50.460303       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-029872-m03" podCIDRs=["10.244.4.0/24"]
	I0920 18:28:50.460352       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:28:50.460408       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:28:50.484279       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:28:50.601506       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:28:50.936857       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:28:51.410479       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:29:00.737344       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:29:09.896757       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:29:09.897101       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-029872-m02"
	I0920 18:29:09.909190       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:29:11.327750       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:29:46.348016       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m02"
	I0920 18:29:46.348190       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-029872-m03"
	I0920 18:29:46.366669       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m02"
	I0920 18:29:46.404269       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.772381ms"
	I0920 18:29:46.404984       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="75.995µs"
	I0920 18:29:51.403418       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:29:51.414022       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m02"
	I0920 18:29:51.416373       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:30:01.495156       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	
	
	==> kube-controller-manager [f90f07dfbad75d09ccff8e3c009a0061d0cf92a7d37b500882f4c886b9021a7e] <==
	I0920 18:34:14.956004       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-029872-m02"
	I0920 18:34:14.979615       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-029872-m03" podCIDRs=["10.244.2.0/24"]
	I0920 18:34:14.979660       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:14.979686       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:15.822169       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:16.183920       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:16.631099       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:25.052746       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:34.576645       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-029872-m02"
	I0920 18:34:34.576762       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:34.586218       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:36.586914       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:39.399442       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:39.434476       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:39.961067       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m03"
	I0920 18:34:39.961114       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-029872-m02"
	I0920 18:35:21.605476       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m02"
	I0920 18:35:21.631941       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.486126ms"
	I0920 18:35:21.632503       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m02"
	I0920 18:35:21.632552       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.579µs"
	I0920 18:35:26.686966       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-029872-m02"
	I0920 18:35:41.537334       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-wtscj"
	I0920 18:35:41.561878       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-wtscj"
	I0920 18:35:41.561915       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mjjpg"
	I0920 18:35:41.590341       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mjjpg"
	
	
	==> kube-proxy [2870a2ffc4d8472bf57d2be157ab5bee83a8ad11be43f8950b4964e68b24dc82] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:26:19.405376       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:26:19.413745       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.208"]
	E0920 18:26:19.414033       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:26:19.443948       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:26:19.444043       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:26:19.444088       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:26:19.446689       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:26:19.446961       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:26:19.447166       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:26:19.448623       1 config.go:199] "Starting service config controller"
	I0920 18:26:19.448807       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:26:19.448884       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:26:19.448920       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:26:19.449440       1 config.go:328] "Starting node config controller"
	I0920 18:26:19.449998       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:26:19.549994       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:26:19.550050       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:26:19.550324       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [63d911e8c81f08e255bbb2b2f2e819d0947975821667db477cfaa24cdad47a8b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:32:58.747986       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:32:58.775688       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.208"]
	E0920 18:32:58.775885       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:32:58.834975       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:32:58.835130       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:32:58.835231       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:32:58.840713       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:32:58.841029       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:32:58.841053       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:32:58.844683       1 config.go:199] "Starting service config controller"
	I0920 18:32:58.844713       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:32:58.844734       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:32:58.844737       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:32:58.845222       1 config.go:328] "Starting node config controller"
	I0920 18:32:58.845248       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:32:58.945218       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:32:58.945293       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:32:58.945306       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [073a54c38674e8692d0308e45f3e05cc9179f3d5b85e00aa5d42bd126e0196ed] <==
	W0920 18:26:10.368192       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 18:26:10.368888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:26:10.371397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 18:26:10.371470       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:26:10.413980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 18:26:10.414368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:26:10.442142       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 18:26:10.442191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:26:10.507767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 18:26:10.507894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:26:10.572508       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 18:26:10.572587       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 18:26:10.653031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 18:26:10.653189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:26:10.680026       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 18:26:10.680620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:26:10.754269       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 18:26:10.754395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:26:10.849967       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 18:26:10.850267       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 18:26:13.551492       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:31:15.519985       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0920 18:31:15.520135       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0920 18:31:15.520451       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0920 18:31:15.520904       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [4e6ed28af14ff16bd8c9770970f355fc2cc818a8f475c5888fc4fdb56c129cb7] <==
	I0920 18:32:56.949929       1 serving.go:386] Generated self-signed cert in-memory
	W0920 18:32:58.121626       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 18:32:58.121769       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 18:32:58.121799       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:32:58.121867       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:32:58.164260       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 18:32:58.164434       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:32:58.167999       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 18:32:58.168060       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:32:58.168216       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 18:32:58.168320       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 18:32:58.269929       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:35:55 multinode-029872 kubelet[3178]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:35:55 multinode-029872 kubelet[3178]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:35:55 multinode-029872 kubelet[3178]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:35:55 multinode-029872 kubelet[3178]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:35:55 multinode-029872 kubelet[3178]: E0920 18:35:55.334208    3178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857355333708180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:35:55 multinode-029872 kubelet[3178]: E0920 18:35:55.334240    3178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857355333708180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:36:05 multinode-029872 kubelet[3178]: E0920 18:36:05.336096    3178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857365335494914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:36:05 multinode-029872 kubelet[3178]: E0920 18:36:05.336377    3178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857365335494914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:36:15 multinode-029872 kubelet[3178]: E0920 18:36:15.341111    3178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857375340442795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:36:15 multinode-029872 kubelet[3178]: E0920 18:36:15.341167    3178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857375340442795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:36:25 multinode-029872 kubelet[3178]: E0920 18:36:25.345745    3178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857385344265919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:36:25 multinode-029872 kubelet[3178]: E0920 18:36:25.345804    3178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857385344265919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:36:35 multinode-029872 kubelet[3178]: E0920 18:36:35.348646    3178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857395348266369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:36:35 multinode-029872 kubelet[3178]: E0920 18:36:35.349406    3178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857395348266369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:36:45 multinode-029872 kubelet[3178]: E0920 18:36:45.351134    3178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857405350582846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:36:45 multinode-029872 kubelet[3178]: E0920 18:36:45.351570    3178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857405350582846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:36:55 multinode-029872 kubelet[3178]: E0920 18:36:55.279577    3178 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 18:36:55 multinode-029872 kubelet[3178]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:36:55 multinode-029872 kubelet[3178]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:36:55 multinode-029872 kubelet[3178]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:36:55 multinode-029872 kubelet[3178]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:36:55 multinode-029872 kubelet[3178]: E0920 18:36:55.354761    3178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857415352839443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:36:55 multinode-029872 kubelet[3178]: E0920 18:36:55.354786    3178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857415352839443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:37:05 multinode-029872 kubelet[3178]: E0920 18:37:05.358683    3178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857425357989953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:37:05 multinode-029872 kubelet[3178]: E0920 18:37:05.358746    3178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857425357989953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:37:04.029302  276383 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19679-237658/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-029872 -n multinode-029872
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-029872 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (144.73s)
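A minimal sketch of the stop-and-status sequence this test exercises, assuming the profile name and minikube binary path shown in the commands above and the {{.Host}} status field used elsewhere in this report; it is an illustration, not the harness code itself:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const bin = "out/minikube-linux-amd64" // binary used throughout this report
	const profile = "multinode-029872"     // profile from the failing run

	// "minikube stop" on this profile is what runs past the harness timeout above.
	if out, err := exec.Command(bin, "stop", "-p", profile).CombinedOutput(); err != nil {
		fmt.Printf("stop did not complete cleanly: %v\n%s\n", err, out)
	}

	// Ask minikube for the host state after the stop.
	out, err := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile).CombinedOutput()
	if err != nil {
		fmt.Printf("status exited non-zero (expected for a stopped cluster): %v\n", err)
	}
	if strings.Contains(string(out), "Stopped") {
		fmt.Println("host reports Stopped")
	} else {
		fmt.Printf("unexpected host state: %s\n", out)
	}
}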

                                                
                                    
TestPreload (172.53s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-968336 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-968336 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m29.477203958s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-968336 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-968336 image pull gcr.io/k8s-minikube/busybox: (3.462036095s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-968336
E0920 18:42:29.487274  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-968336: (6.599966655s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-968336 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0920 18:43:30.942142  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-968336 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m9.884842041s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-968336 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-09-20 18:43:41.901012188 +0000 UTC m=+4084.561458850
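The assertion that fails here (preload_test.go:76) reduces to: after the stop/start cycle, `image list` on the profile should still contain the image pulled earlier. A minimal Go sketch of that check, reusing the binary path, profile name, and image from the commands above; it is a standalone illustration, not the test's own code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const bin = "out/minikube-linux-amd64"      // binary used throughout this report
	const profile = "test-preload-968336"       // profile from the failing run
	const image = "gcr.io/k8s-minikube/busybox" // pulled before the stop/start cycle

	out, err := exec.Command(bin, "-p", profile, "image", "list").CombinedOutput()
	if err != nil {
		fmt.Printf("image list failed: %v\n%s\n", err, out)
		return
	}
	if strings.Contains(string(out), image) {
		fmt.Println("OK: image survived the stop/start cycle")
	} else {
		fmt.Printf("FAIL: %s missing from image list:\n%s\n", image, out)
	}
}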
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-968336 -n test-preload-968336
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-968336 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-968336 logs -n 25: (1.067947777s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-029872 ssh -n                                                                 | multinode-029872     | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n multinode-029872 sudo cat                                       | multinode-029872     | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | /home/docker/cp-test_multinode-029872-m03_multinode-029872.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-029872 cp multinode-029872-m03:/home/docker/cp-test.txt                       | multinode-029872     | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m02:/home/docker/cp-test_multinode-029872-m03_multinode-029872-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n                                                                 | multinode-029872     | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | multinode-029872-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-029872 ssh -n multinode-029872-m02 sudo cat                                   | multinode-029872     | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | /home/docker/cp-test_multinode-029872-m03_multinode-029872-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-029872 node stop m03                                                          | multinode-029872     | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	| node    | multinode-029872 node start                                                             | multinode-029872     | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:29 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-029872                                                                | multinode-029872     | jenkins | v1.34.0 | 20 Sep 24 18:29 UTC |                     |
	| stop    | -p multinode-029872                                                                     | multinode-029872     | jenkins | v1.34.0 | 20 Sep 24 18:29 UTC |                     |
	| start   | -p multinode-029872                                                                     | multinode-029872     | jenkins | v1.34.0 | 20 Sep 24 18:31 UTC | 20 Sep 24 18:34 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-029872                                                                | multinode-029872     | jenkins | v1.34.0 | 20 Sep 24 18:34 UTC |                     |
	| node    | multinode-029872 node delete                                                            | multinode-029872     | jenkins | v1.34.0 | 20 Sep 24 18:34 UTC | 20 Sep 24 18:34 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-029872 stop                                                                   | multinode-029872     | jenkins | v1.34.0 | 20 Sep 24 18:34 UTC |                     |
	| start   | -p multinode-029872                                                                     | multinode-029872     | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:40 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-029872                                                                | multinode-029872     | jenkins | v1.34.0 | 20 Sep 24 18:40 UTC |                     |
	| start   | -p multinode-029872-m02                                                                 | multinode-029872-m02 | jenkins | v1.34.0 | 20 Sep 24 18:40 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-029872-m03                                                                 | multinode-029872-m03 | jenkins | v1.34.0 | 20 Sep 24 18:40 UTC | 20 Sep 24 18:40 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-029872                                                                 | multinode-029872     | jenkins | v1.34.0 | 20 Sep 24 18:40 UTC |                     |
	| delete  | -p multinode-029872-m03                                                                 | multinode-029872-m03 | jenkins | v1.34.0 | 20 Sep 24 18:40 UTC | 20 Sep 24 18:40 UTC |
	| delete  | -p multinode-029872                                                                     | multinode-029872     | jenkins | v1.34.0 | 20 Sep 24 18:40 UTC | 20 Sep 24 18:40 UTC |
	| start   | -p test-preload-968336                                                                  | test-preload-968336  | jenkins | v1.34.0 | 20 Sep 24 18:40 UTC | 20 Sep 24 18:42 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-968336 image pull                                                          | test-preload-968336  | jenkins | v1.34.0 | 20 Sep 24 18:42 UTC | 20 Sep 24 18:42 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-968336                                                                  | test-preload-968336  | jenkins | v1.34.0 | 20 Sep 24 18:42 UTC | 20 Sep 24 18:42 UTC |
	| start   | -p test-preload-968336                                                                  | test-preload-968336  | jenkins | v1.34.0 | 20 Sep 24 18:42 UTC | 20 Sep 24 18:43 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-968336 image list                                                          | test-preload-968336  | jenkins | v1.34.0 | 20 Sep 24 18:43 UTC | 20 Sep 24 18:43 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:42:31
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:42:31.842791  278711 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:42:31.843016  278711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:42:31.843024  278711 out.go:358] Setting ErrFile to fd 2...
	I0920 18:42:31.843028  278711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:42:31.843192  278711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 18:42:31.843777  278711 out.go:352] Setting JSON to false
	I0920 18:42:31.844698  278711 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8695,"bootTime":1726849057,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:42:31.844797  278711 start.go:139] virtualization: kvm guest
	I0920 18:42:31.847266  278711 out.go:177] * [test-preload-968336] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:42:31.848993  278711 notify.go:220] Checking for updates...
	I0920 18:42:31.849020  278711 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 18:42:31.850577  278711 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:42:31.852220  278711 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 18:42:31.853746  278711 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:42:31.855445  278711 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:42:31.856818  278711 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:42:31.858784  278711 config.go:182] Loaded profile config "test-preload-968336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0920 18:42:31.859446  278711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:42:31.859509  278711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:42:31.874637  278711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
	I0920 18:42:31.875232  278711 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:42:31.875811  278711 main.go:141] libmachine: Using API Version  1
	I0920 18:42:31.875836  278711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:42:31.876195  278711 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:42:31.876397  278711 main.go:141] libmachine: (test-preload-968336) Calling .DriverName
	I0920 18:42:31.878423  278711 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 18:42:31.879804  278711 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:42:31.880279  278711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:42:31.880329  278711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:42:31.896122  278711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35121
	I0920 18:42:31.896531  278711 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:42:31.897044  278711 main.go:141] libmachine: Using API Version  1
	I0920 18:42:31.897065  278711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:42:31.897420  278711 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:42:31.897611  278711 main.go:141] libmachine: (test-preload-968336) Calling .DriverName
	I0920 18:42:31.934995  278711 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:42:31.936397  278711 start.go:297] selected driver: kvm2
	I0920 18:42:31.936412  278711 start.go:901] validating driver "kvm2" against &{Name:test-preload-968336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-968336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:42:31.936520  278711 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:42:31.937266  278711 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:42:31.937355  278711 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:42:31.953544  278711 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:42:31.953880  278711 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:42:31.953924  278711 cni.go:84] Creating CNI manager for ""
	I0920 18:42:31.953965  278711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:42:31.954041  278711 start.go:340] cluster config:
	{Name:test-preload-968336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-968336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:42:31.954146  278711 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:42:31.956095  278711 out.go:177] * Starting "test-preload-968336" primary control-plane node in "test-preload-968336" cluster
	I0920 18:42:31.957410  278711 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0920 18:42:32.392053  278711 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0920 18:42:32.392105  278711 cache.go:56] Caching tarball of preloaded images
	I0920 18:42:32.392284  278711 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0920 18:42:32.394346  278711 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0920 18:42:32.395612  278711 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0920 18:42:32.502020  278711 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0920 18:42:43.315880  278711 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0920 18:42:43.315990  278711 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0920 18:42:44.291059  278711 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0920 18:42:44.291198  278711 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/test-preload-968336/config.json ...
	I0920 18:42:44.291439  278711 start.go:360] acquireMachinesLock for test-preload-968336: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:42:44.291510  278711 start.go:364] duration metric: took 45.963µs to acquireMachinesLock for "test-preload-968336"
	I0920 18:42:44.291523  278711 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:42:44.291529  278711 fix.go:54] fixHost starting: 
	I0920 18:42:44.291817  278711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:42:44.291854  278711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:42:44.307111  278711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40605
	I0920 18:42:44.307579  278711 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:42:44.308150  278711 main.go:141] libmachine: Using API Version  1
	I0920 18:42:44.308179  278711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:42:44.308556  278711 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:42:44.308800  278711 main.go:141] libmachine: (test-preload-968336) Calling .DriverName
	I0920 18:42:44.309015  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetState
	I0920 18:42:44.310907  278711 fix.go:112] recreateIfNeeded on test-preload-968336: state=Stopped err=<nil>
	I0920 18:42:44.310931  278711 main.go:141] libmachine: (test-preload-968336) Calling .DriverName
	W0920 18:42:44.311104  278711 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:42:44.313318  278711 out.go:177] * Restarting existing kvm2 VM for "test-preload-968336" ...
	I0920 18:42:44.315159  278711 main.go:141] libmachine: (test-preload-968336) Calling .Start
	I0920 18:42:44.315373  278711 main.go:141] libmachine: (test-preload-968336) Ensuring networks are active...
	I0920 18:42:44.316232  278711 main.go:141] libmachine: (test-preload-968336) Ensuring network default is active
	I0920 18:42:44.316631  278711 main.go:141] libmachine: (test-preload-968336) Ensuring network mk-test-preload-968336 is active
	I0920 18:42:44.317004  278711 main.go:141] libmachine: (test-preload-968336) Getting domain xml...
	I0920 18:42:44.317762  278711 main.go:141] libmachine: (test-preload-968336) Creating domain...
	I0920 18:42:45.549846  278711 main.go:141] libmachine: (test-preload-968336) Waiting to get IP...
	I0920 18:42:45.551016  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:42:45.551480  278711 main.go:141] libmachine: (test-preload-968336) DBG | unable to find current IP address of domain test-preload-968336 in network mk-test-preload-968336
	I0920 18:42:45.551578  278711 main.go:141] libmachine: (test-preload-968336) DBG | I0920 18:42:45.551466  278794 retry.go:31] will retry after 221.663815ms: waiting for machine to come up
	I0920 18:42:45.775360  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:42:45.775861  278711 main.go:141] libmachine: (test-preload-968336) DBG | unable to find current IP address of domain test-preload-968336 in network mk-test-preload-968336
	I0920 18:42:45.775879  278711 main.go:141] libmachine: (test-preload-968336) DBG | I0920 18:42:45.775812  278794 retry.go:31] will retry after 335.174781ms: waiting for machine to come up
	I0920 18:42:46.112531  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:42:46.113107  278711 main.go:141] libmachine: (test-preload-968336) DBG | unable to find current IP address of domain test-preload-968336 in network mk-test-preload-968336
	I0920 18:42:46.113133  278711 main.go:141] libmachine: (test-preload-968336) DBG | I0920 18:42:46.113052  278794 retry.go:31] will retry after 425.093719ms: waiting for machine to come up
	I0920 18:42:46.539742  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:42:46.540212  278711 main.go:141] libmachine: (test-preload-968336) DBG | unable to find current IP address of domain test-preload-968336 in network mk-test-preload-968336
	I0920 18:42:46.540242  278711 main.go:141] libmachine: (test-preload-968336) DBG | I0920 18:42:46.540166  278794 retry.go:31] will retry after 369.048931ms: waiting for machine to come up
	I0920 18:42:46.910702  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:42:46.911218  278711 main.go:141] libmachine: (test-preload-968336) DBG | unable to find current IP address of domain test-preload-968336 in network mk-test-preload-968336
	I0920 18:42:46.911254  278711 main.go:141] libmachine: (test-preload-968336) DBG | I0920 18:42:46.911164  278794 retry.go:31] will retry after 578.622613ms: waiting for machine to come up
	I0920 18:42:47.491094  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:42:47.491643  278711 main.go:141] libmachine: (test-preload-968336) DBG | unable to find current IP address of domain test-preload-968336 in network mk-test-preload-968336
	I0920 18:42:47.491669  278711 main.go:141] libmachine: (test-preload-968336) DBG | I0920 18:42:47.491545  278794 retry.go:31] will retry after 668.660929ms: waiting for machine to come up
	I0920 18:42:48.161503  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:42:48.162016  278711 main.go:141] libmachine: (test-preload-968336) DBG | unable to find current IP address of domain test-preload-968336 in network mk-test-preload-968336
	I0920 18:42:48.162040  278711 main.go:141] libmachine: (test-preload-968336) DBG | I0920 18:42:48.161962  278794 retry.go:31] will retry after 1.000027079s: waiting for machine to come up
	I0920 18:42:49.163129  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:42:49.163703  278711 main.go:141] libmachine: (test-preload-968336) DBG | unable to find current IP address of domain test-preload-968336 in network mk-test-preload-968336
	I0920 18:42:49.163718  278711 main.go:141] libmachine: (test-preload-968336) DBG | I0920 18:42:49.163660  278794 retry.go:31] will retry after 1.274697058s: waiting for machine to come up
	I0920 18:42:50.439849  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:42:50.440380  278711 main.go:141] libmachine: (test-preload-968336) DBG | unable to find current IP address of domain test-preload-968336 in network mk-test-preload-968336
	I0920 18:42:50.440410  278711 main.go:141] libmachine: (test-preload-968336) DBG | I0920 18:42:50.440334  278794 retry.go:31] will retry after 1.513793623s: waiting for machine to come up
	I0920 18:42:51.955374  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:42:51.955938  278711 main.go:141] libmachine: (test-preload-968336) DBG | unable to find current IP address of domain test-preload-968336 in network mk-test-preload-968336
	I0920 18:42:51.955965  278711 main.go:141] libmachine: (test-preload-968336) DBG | I0920 18:42:51.955896  278794 retry.go:31] will retry after 1.998933035s: waiting for machine to come up
	I0920 18:42:53.957448  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:42:53.957999  278711 main.go:141] libmachine: (test-preload-968336) DBG | unable to find current IP address of domain test-preload-968336 in network mk-test-preload-968336
	I0920 18:42:53.958040  278711 main.go:141] libmachine: (test-preload-968336) DBG | I0920 18:42:53.957923  278794 retry.go:31] will retry after 2.396867757s: waiting for machine to come up
	I0920 18:42:56.357991  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:42:56.358421  278711 main.go:141] libmachine: (test-preload-968336) DBG | unable to find current IP address of domain test-preload-968336 in network mk-test-preload-968336
	I0920 18:42:56.358447  278711 main.go:141] libmachine: (test-preload-968336) DBG | I0920 18:42:56.358373  278794 retry.go:31] will retry after 3.458869572s: waiting for machine to come up
	I0920 18:42:59.818596  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:42:59.819003  278711 main.go:141] libmachine: (test-preload-968336) DBG | unable to find current IP address of domain test-preload-968336 in network mk-test-preload-968336
	I0920 18:42:59.819031  278711 main.go:141] libmachine: (test-preload-968336) DBG | I0920 18:42:59.818961  278794 retry.go:31] will retry after 3.075420268s: waiting for machine to come up
	I0920 18:43:02.898376  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:02.898873  278711 main.go:141] libmachine: (test-preload-968336) Found IP for machine: 192.168.39.205
	I0920 18:43:02.898908  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has current primary IP address 192.168.39.205 and MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:02.898917  278711 main.go:141] libmachine: (test-preload-968336) Reserving static IP address...
	I0920 18:43:02.899337  278711 main.go:141] libmachine: (test-preload-968336) DBG | found host DHCP lease matching {name: "test-preload-968336", mac: "52:54:00:cc:08:c1", ip: "192.168.39.205"} in network mk-test-preload-968336: {Iface:virbr1 ExpiryTime:2024-09-20 19:41:06 +0000 UTC Type:0 Mac:52:54:00:cc:08:c1 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-968336 Clientid:01:52:54:00:cc:08:c1}
	I0920 18:43:02.899362  278711 main.go:141] libmachine: (test-preload-968336) DBG | skip adding static IP to network mk-test-preload-968336 - found existing host DHCP lease matching {name: "test-preload-968336", mac: "52:54:00:cc:08:c1", ip: "192.168.39.205"}
	I0920 18:43:02.899374  278711 main.go:141] libmachine: (test-preload-968336) Reserved static IP address: 192.168.39.205
	I0920 18:43:02.899388  278711 main.go:141] libmachine: (test-preload-968336) Waiting for SSH to be available...
	I0920 18:43:02.899404  278711 main.go:141] libmachine: (test-preload-968336) DBG | Getting to WaitForSSH function...
	I0920 18:43:02.901318  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:02.901606  278711 main.go:141] libmachine: (test-preload-968336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:08:c1", ip: ""} in network mk-test-preload-968336: {Iface:virbr1 ExpiryTime:2024-09-20 19:41:06 +0000 UTC Type:0 Mac:52:54:00:cc:08:c1 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-968336 Clientid:01:52:54:00:cc:08:c1}
	I0920 18:43:02.901641  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined IP address 192.168.39.205 and MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:02.901789  278711 main.go:141] libmachine: (test-preload-968336) DBG | Using SSH client type: external
	I0920 18:43:02.901827  278711 main.go:141] libmachine: (test-preload-968336) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/test-preload-968336/id_rsa (-rw-------)
	I0920 18:43:02.901858  278711 main.go:141] libmachine: (test-preload-968336) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.205 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/test-preload-968336/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:43:02.901872  278711 main.go:141] libmachine: (test-preload-968336) DBG | About to run SSH command:
	I0920 18:43:02.901884  278711 main.go:141] libmachine: (test-preload-968336) DBG | exit 0
	I0920 18:43:03.029859  278711 main.go:141] libmachine: (test-preload-968336) DBG | SSH cmd err, output: <nil>: 
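The WaitForSSH step above shells out to the external ssh binary with a list of -o options and runs a trivial "exit 0" until it succeeds. A minimal sketch of that probe, assuming the host and key path shown in the log (this is not minikube's sshutil code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns nil once `ssh ... "exit 0"` succeeds against the guest.
func sshReady(host, keyPath string) error {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	key := "/home/jenkins/minikube-integration/19679-237658/.minikube/machines/test-preload-968336/id_rsa"
	for i := 0; i < 30; i++ {
		if err := sshReady("192.168.39.205", key); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}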
	I0920 18:43:03.030275  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetConfigRaw
	I0920 18:43:03.030967  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetIP
	I0920 18:43:03.033540  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:03.033976  278711 main.go:141] libmachine: (test-preload-968336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:08:c1", ip: ""} in network mk-test-preload-968336: {Iface:virbr1 ExpiryTime:2024-09-20 19:41:06 +0000 UTC Type:0 Mac:52:54:00:cc:08:c1 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-968336 Clientid:01:52:54:00:cc:08:c1}
	I0920 18:43:03.034006  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined IP address 192.168.39.205 and MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:03.034220  278711 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/test-preload-968336/config.json ...
	I0920 18:43:03.034446  278711 machine.go:93] provisionDockerMachine start ...
	I0920 18:43:03.034467  278711 main.go:141] libmachine: (test-preload-968336) Calling .DriverName
	I0920 18:43:03.034703  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHHostname
	I0920 18:43:03.037184  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:03.037497  278711 main.go:141] libmachine: (test-preload-968336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:08:c1", ip: ""} in network mk-test-preload-968336: {Iface:virbr1 ExpiryTime:2024-09-20 19:41:06 +0000 UTC Type:0 Mac:52:54:00:cc:08:c1 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-968336 Clientid:01:52:54:00:cc:08:c1}
	I0920 18:43:03.037526  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined IP address 192.168.39.205 and MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:03.037681  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHPort
	I0920 18:43:03.037853  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHKeyPath
	I0920 18:43:03.038007  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHKeyPath
	I0920 18:43:03.038102  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHUsername
	I0920 18:43:03.038305  278711 main.go:141] libmachine: Using SSH client type: native
	I0920 18:43:03.038606  278711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0920 18:43:03.038620  278711 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:43:03.150403  278711 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 18:43:03.150434  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetMachineName
	I0920 18:43:03.150696  278711 buildroot.go:166] provisioning hostname "test-preload-968336"
	I0920 18:43:03.150733  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetMachineName
	I0920 18:43:03.150934  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHHostname
	I0920 18:43:03.153489  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:03.153970  278711 main.go:141] libmachine: (test-preload-968336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:08:c1", ip: ""} in network mk-test-preload-968336: {Iface:virbr1 ExpiryTime:2024-09-20 19:41:06 +0000 UTC Type:0 Mac:52:54:00:cc:08:c1 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-968336 Clientid:01:52:54:00:cc:08:c1}
	I0920 18:43:03.154001  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined IP address 192.168.39.205 and MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:03.154181  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHPort
	I0920 18:43:03.154375  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHKeyPath
	I0920 18:43:03.154541  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHKeyPath
	I0920 18:43:03.154721  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHUsername
	I0920 18:43:03.154940  278711 main.go:141] libmachine: Using SSH client type: native
	I0920 18:43:03.155101  278711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0920 18:43:03.155112  278711 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-968336 && echo "test-preload-968336" | sudo tee /etc/hostname
	I0920 18:43:03.280334  278711 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-968336
	
	I0920 18:43:03.280376  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHHostname
	I0920 18:43:03.283373  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:03.283729  278711 main.go:141] libmachine: (test-preload-968336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:08:c1", ip: ""} in network mk-test-preload-968336: {Iface:virbr1 ExpiryTime:2024-09-20 19:41:06 +0000 UTC Type:0 Mac:52:54:00:cc:08:c1 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-968336 Clientid:01:52:54:00:cc:08:c1}
	I0920 18:43:03.283763  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined IP address 192.168.39.205 and MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:03.283907  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHPort
	I0920 18:43:03.284124  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHKeyPath
	I0920 18:43:03.284277  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHKeyPath
	I0920 18:43:03.284403  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHUsername
	I0920 18:43:03.284661  278711 main.go:141] libmachine: Using SSH client type: native
	I0920 18:43:03.284836  278711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0920 18:43:03.284851  278711 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-968336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-968336/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-968336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:43:03.403195  278711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:43:03.403231  278711 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 18:43:03.403265  278711 buildroot.go:174] setting up certificates
	I0920 18:43:03.403276  278711 provision.go:84] configureAuth start
	I0920 18:43:03.403287  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetMachineName
	I0920 18:43:03.403575  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetIP
	I0920 18:43:03.406308  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:03.406741  278711 main.go:141] libmachine: (test-preload-968336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:08:c1", ip: ""} in network mk-test-preload-968336: {Iface:virbr1 ExpiryTime:2024-09-20 19:41:06 +0000 UTC Type:0 Mac:52:54:00:cc:08:c1 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-968336 Clientid:01:52:54:00:cc:08:c1}
	I0920 18:43:03.406775  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined IP address 192.168.39.205 and MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:03.406940  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHHostname
	I0920 18:43:03.409255  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:03.409531  278711 main.go:141] libmachine: (test-preload-968336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:08:c1", ip: ""} in network mk-test-preload-968336: {Iface:virbr1 ExpiryTime:2024-09-20 19:41:06 +0000 UTC Type:0 Mac:52:54:00:cc:08:c1 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-968336 Clientid:01:52:54:00:cc:08:c1}
	I0920 18:43:03.409562  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined IP address 192.168.39.205 and MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:03.409716  278711 provision.go:143] copyHostCerts
	I0920 18:43:03.409783  278711 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 18:43:03.409810  278711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 18:43:03.409922  278711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 18:43:03.410038  278711 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 18:43:03.410049  278711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 18:43:03.410090  278711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 18:43:03.410170  278711 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 18:43:03.410180  278711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 18:43:03.410213  278711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 18:43:03.410287  278711 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.test-preload-968336 san=[127.0.0.1 192.168.39.205 localhost minikube test-preload-968336]
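configureAuth above issues a server certificate whose SANs cover 127.0.0.1, the guest IP, localhost, minikube, and the machine name, with the 26280h expiry from the cluster config. A minimal crypto/x509 sketch of a certificate with those SANs (self-signed only to keep the example short; minikube actually signs against its CA):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-968336"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the "san=[...]" log line above
		DNSNames:    []string{"localhost", "minikube", "test-preload-968336"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.205")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}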
	I0920 18:43:03.523456  278711 provision.go:177] copyRemoteCerts
	I0920 18:43:03.523547  278711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:43:03.523587  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHHostname
	I0920 18:43:03.526552  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:03.526891  278711 main.go:141] libmachine: (test-preload-968336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:08:c1", ip: ""} in network mk-test-preload-968336: {Iface:virbr1 ExpiryTime:2024-09-20 19:41:06 +0000 UTC Type:0 Mac:52:54:00:cc:08:c1 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-968336 Clientid:01:52:54:00:cc:08:c1}
	I0920 18:43:03.526926  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined IP address 192.168.39.205 and MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:03.527140  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHPort
	I0920 18:43:03.527372  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHKeyPath
	I0920 18:43:03.527536  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHUsername
	I0920 18:43:03.527696  278711 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/test-preload-968336/id_rsa Username:docker}
	I0920 18:43:03.611889  278711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:43:03.635903  278711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 18:43:03.658868  278711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:43:03.682800  278711 provision.go:87] duration metric: took 279.50557ms to configureAuth
	I0920 18:43:03.682830  278711 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:43:03.682997  278711 config.go:182] Loaded profile config "test-preload-968336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0920 18:43:03.683070  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHHostname
	I0920 18:43:03.685953  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:03.686392  278711 main.go:141] libmachine: (test-preload-968336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:08:c1", ip: ""} in network mk-test-preload-968336: {Iface:virbr1 ExpiryTime:2024-09-20 19:41:06 +0000 UTC Type:0 Mac:52:54:00:cc:08:c1 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-968336 Clientid:01:52:54:00:cc:08:c1}
	I0920 18:43:03.686427  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined IP address 192.168.39.205 and MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:03.686567  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHPort
	I0920 18:43:03.686802  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHKeyPath
	I0920 18:43:03.686970  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHKeyPath
	I0920 18:43:03.687107  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHUsername
	I0920 18:43:03.687283  278711 main.go:141] libmachine: Using SSH client type: native
	I0920 18:43:03.687535  278711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0920 18:43:03.687557  278711 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:43:03.930654  278711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:43:03.930688  278711 machine.go:96] duration metric: took 896.228652ms to provisionDockerMachine
	I0920 18:43:03.930699  278711 start.go:293] postStartSetup for "test-preload-968336" (driver="kvm2")
	I0920 18:43:03.930710  278711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:43:03.930726  278711 main.go:141] libmachine: (test-preload-968336) Calling .DriverName
	I0920 18:43:03.931155  278711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:43:03.931189  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHHostname
	I0920 18:43:03.933925  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:03.934401  278711 main.go:141] libmachine: (test-preload-968336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:08:c1", ip: ""} in network mk-test-preload-968336: {Iface:virbr1 ExpiryTime:2024-09-20 19:41:06 +0000 UTC Type:0 Mac:52:54:00:cc:08:c1 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-968336 Clientid:01:52:54:00:cc:08:c1}
	I0920 18:43:03.934432  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined IP address 192.168.39.205 and MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:03.934566  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHPort
	I0920 18:43:03.934760  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHKeyPath
	I0920 18:43:03.934882  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHUsername
	I0920 18:43:03.934982  278711 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/test-preload-968336/id_rsa Username:docker}
	I0920 18:43:04.021024  278711 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:43:04.025169  278711 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:43:04.025210  278711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 18:43:04.025285  278711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 18:43:04.025380  278711 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 18:43:04.025513  278711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:43:04.034776  278711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:43:04.060212  278711 start.go:296] duration metric: took 129.498228ms for postStartSetup
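postStartSetup above scans the local .minikube/files tree and mirrors anything it finds (here a single cert bundle) onto the guest under the corresponding absolute path. A rough sketch of that scan, with the directory root taken from the log and the copy step omitted:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// scanAssets walks srcRoot and reports the guest path each file would be
// copied to, echoing the "local asset: ... -> ..." line above.
func scanAssets(srcRoot string) error {
	return filepath.WalkDir(srcRoot, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(srcRoot, p)
		if err != nil {
			return err
		}
		dest := "/" + rel // files/etc/ssl/certs/... mirrors onto /etc/ssl/certs/...
		fmt.Printf("local asset: %s -> %s\n", p, dest)
		return nil
	})
}

func main() {
	if err := scanAssets("/home/jenkins/minikube-integration/19679-237658/.minikube/files"); err != nil {
		fmt.Println("scan failed:", err)
	}
}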
	I0920 18:43:04.060266  278711 fix.go:56] duration metric: took 19.768735209s for fixHost
	I0920 18:43:04.060292  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHHostname
	I0920 18:43:04.063343  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:04.063695  278711 main.go:141] libmachine: (test-preload-968336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:08:c1", ip: ""} in network mk-test-preload-968336: {Iface:virbr1 ExpiryTime:2024-09-20 19:41:06 +0000 UTC Type:0 Mac:52:54:00:cc:08:c1 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-968336 Clientid:01:52:54:00:cc:08:c1}
	I0920 18:43:04.063733  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined IP address 192.168.39.205 and MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:04.063884  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHPort
	I0920 18:43:04.064122  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHKeyPath
	I0920 18:43:04.064292  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHKeyPath
	I0920 18:43:04.064467  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHUsername
	I0920 18:43:04.064672  278711 main.go:141] libmachine: Using SSH client type: native
	I0920 18:43:04.064905  278711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0920 18:43:04.064918  278711 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:43:04.174655  278711 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726857784.151577982
	
	I0920 18:43:04.174679  278711 fix.go:216] guest clock: 1726857784.151577982
	I0920 18:43:04.174686  278711 fix.go:229] Guest: 2024-09-20 18:43:04.151577982 +0000 UTC Remote: 2024-09-20 18:43:04.060271535 +0000 UTC m=+32.253273178 (delta=91.306447ms)
	I0920 18:43:04.174714  278711 fix.go:200] guest clock delta is within tolerance: 91.306447ms
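The clock check above runs `date +%s.%N` on the guest and compares it to the host's view of the same instant, accepting the ~91ms delta. A small sketch of that comparison using the values from the log (the one-second tolerance here is an assumption, not the value fix.go uses):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far the
// guest clock is from a reference (host) time.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	const tolerance = time.Second // assumption
	delta, err := clockDelta("1726857784.151577982", time.Unix(1726857784, 60271535))
	if err != nil {
		panic(err)
	}
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
	} else {
		fmt.Printf("guest clock delta %s exceeds tolerance; the clock would need adjusting\n", delta)
	}
}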
	I0920 18:43:04.174721  278711 start.go:83] releasing machines lock for "test-preload-968336", held for 19.883202864s
	I0920 18:43:04.174744  278711 main.go:141] libmachine: (test-preload-968336) Calling .DriverName
	I0920 18:43:04.175028  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetIP
	I0920 18:43:04.177781  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:04.178220  278711 main.go:141] libmachine: (test-preload-968336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:08:c1", ip: ""} in network mk-test-preload-968336: {Iface:virbr1 ExpiryTime:2024-09-20 19:41:06 +0000 UTC Type:0 Mac:52:54:00:cc:08:c1 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-968336 Clientid:01:52:54:00:cc:08:c1}
	I0920 18:43:04.178253  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined IP address 192.168.39.205 and MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:04.178369  278711 main.go:141] libmachine: (test-preload-968336) Calling .DriverName
	I0920 18:43:04.178935  278711 main.go:141] libmachine: (test-preload-968336) Calling .DriverName
	I0920 18:43:04.179137  278711 main.go:141] libmachine: (test-preload-968336) Calling .DriverName
	I0920 18:43:04.179229  278711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:43:04.179280  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHHostname
	I0920 18:43:04.179387  278711 ssh_runner.go:195] Run: cat /version.json
	I0920 18:43:04.179413  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHHostname
	I0920 18:43:04.182133  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:04.182411  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:04.182547  278711 main.go:141] libmachine: (test-preload-968336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:08:c1", ip: ""} in network mk-test-preload-968336: {Iface:virbr1 ExpiryTime:2024-09-20 19:41:06 +0000 UTC Type:0 Mac:52:54:00:cc:08:c1 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-968336 Clientid:01:52:54:00:cc:08:c1}
	I0920 18:43:04.182576  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined IP address 192.168.39.205 and MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:04.182723  278711 main.go:141] libmachine: (test-preload-968336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:08:c1", ip: ""} in network mk-test-preload-968336: {Iface:virbr1 ExpiryTime:2024-09-20 19:41:06 +0000 UTC Type:0 Mac:52:54:00:cc:08:c1 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-968336 Clientid:01:52:54:00:cc:08:c1}
	I0920 18:43:04.182741  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined IP address 192.168.39.205 and MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:04.182752  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHPort
	I0920 18:43:04.182952  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHKeyPath
	I0920 18:43:04.182959  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHPort
	I0920 18:43:04.183168  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHKeyPath
	I0920 18:43:04.183174  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHUsername
	I0920 18:43:04.183331  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHUsername
	I0920 18:43:04.183342  278711 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/test-preload-968336/id_rsa Username:docker}
	I0920 18:43:04.183440  278711 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/test-preload-968336/id_rsa Username:docker}
	I0920 18:43:04.301668  278711 ssh_runner.go:195] Run: systemctl --version
	I0920 18:43:04.307778  278711 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:43:04.457680  278711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:43:04.464459  278711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:43:04.464552  278711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:43:04.482053  278711 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:43:04.482088  278711 start.go:495] detecting cgroup driver to use...
	I0920 18:43:04.482164  278711 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:43:04.499749  278711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:43:04.514663  278711 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:43:04.514725  278711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:43:04.529056  278711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:43:04.543473  278711 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:43:04.662730  278711 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:43:04.816193  278711 docker.go:233] disabling docker service ...
	I0920 18:43:04.816287  278711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:43:04.830939  278711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:43:04.844528  278711 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:43:04.956581  278711 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:43:05.071362  278711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:43:05.085886  278711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:43:05.104379  278711 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0920 18:43:05.104450  278711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:43:05.114771  278711 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:43:05.114856  278711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:43:05.125241  278711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:43:05.135983  278711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:43:05.146894  278711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:43:05.157363  278711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:43:05.168038  278711 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:43:05.184645  278711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:43:05.197817  278711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:43:05.207831  278711 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:43:05.207912  278711 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:43:05.220566  278711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:43:05.230158  278711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:43:05.344190  278711 ssh_runner.go:195] Run: sudo systemctl restart crio
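The lines above show the netfilter probe failing (the bridge-nf-call-iptables sysctl is absent), minikube falling back to `modprobe br_netfilter`, enabling IPv4 forwarding, and restarting crio. A hedged sketch of that check-then-fallback, run with local commands rather than over ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w: %s", name, args, err, out)
	}
	return nil
}

func main() {
	// Verify bridge netfilter; if the sysctl is missing, load the module.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Println("modprobe failed:", err)
			return
		}
	}
	// Enable IPv4 forwarding and restart the runtime, as the log does.
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
	_ = run("sudo", "systemctl", "daemon-reload")
	_ = run("sudo", "systemctl", "restart", "crio")
}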
	I0920 18:43:05.433861  278711 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:43:05.434028  278711 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:43:05.439764  278711 start.go:563] Will wait 60s for crictl version
	I0920 18:43:05.439830  278711 ssh_runner.go:195] Run: which crictl
	I0920 18:43:05.443522  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:43:05.478434  278711 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:43:05.478541  278711 ssh_runner.go:195] Run: crio --version
	I0920 18:43:05.510723  278711 ssh_runner.go:195] Run: crio --version
	I0920 18:43:05.539963  278711 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0920 18:43:05.541447  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetIP
	I0920 18:43:05.544090  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:05.544493  278711 main.go:141] libmachine: (test-preload-968336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:08:c1", ip: ""} in network mk-test-preload-968336: {Iface:virbr1 ExpiryTime:2024-09-20 19:41:06 +0000 UTC Type:0 Mac:52:54:00:cc:08:c1 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-968336 Clientid:01:52:54:00:cc:08:c1}
	I0920 18:43:05.544521  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined IP address 192.168.39.205 and MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:05.544745  278711 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:43:05.548818  278711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:43:05.561099  278711 kubeadm.go:883] updating cluster {Name:test-preload-968336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-968336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:43:05.561213  278711 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0920 18:43:05.561268  278711 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:43:05.601273  278711 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0920 18:43:05.601340  278711 ssh_runner.go:195] Run: which lz4
	I0920 18:43:05.605180  278711 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:43:05.609266  278711 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:43:05.609300  278711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0920 18:43:07.061079  278711 crio.go:462] duration metric: took 1.455925585s to copy over tarball
	I0920 18:43:07.061157  278711 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:43:09.453273  278711 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.392083016s)
	I0920 18:43:09.453301  278711 crio.go:469] duration metric: took 2.392192527s to extract the tarball
	I0920 18:43:09.453309  278711 ssh_runner.go:146] rm: /preloaded.tar.lz4
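The preload step above stats /preloaded.tar.lz4 on the guest, copies the ~459 MB cached tarball over when it is missing, extracts it into /var with lz4-compressed tar, and then removes it. A local illustration of that stat-then-extract flow, using the same paths and tar flags as the log (this is not ssh_runner itself):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // guest-side path from the log

	if _, err := os.Stat(tarball); err != nil {
		// In minikube this is the point where the cached tarball is copied over.
		fmt.Println("tarball missing, would copy it over first:", err)
		return
	}

	// Extract with xattrs preserved, decompressing with lz4, into /var.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
		return
	}
	// Remove the tarball afterwards, as the log does.
	_ = os.Remove(tarball)
	fmt.Println("preload extracted")
}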
	I0920 18:43:09.494233  278711 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:43:09.537587  278711 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0920 18:43:09.537612  278711 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 18:43:09.537672  278711 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:43:09.537697  278711 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0920 18:43:09.537714  278711 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0920 18:43:09.537705  278711 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0920 18:43:09.537742  278711 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0920 18:43:09.537800  278711 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0920 18:43:09.537810  278711 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 18:43:09.537814  278711 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0920 18:43:09.539231  278711 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0920 18:43:09.539231  278711 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0920 18:43:09.539231  278711 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 18:43:09.539232  278711 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0920 18:43:09.539232  278711 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:43:09.539288  278711 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0920 18:43:09.539233  278711 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0920 18:43:09.539232  278711 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0920 18:43:09.764972  278711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0920 18:43:09.807620  278711 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0920 18:43:09.807667  278711 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0920 18:43:09.807727  278711 ssh_runner.go:195] Run: which crictl
	I0920 18:43:09.811771  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0920 18:43:09.821830  278711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0920 18:43:09.842933  278711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0920 18:43:09.847005  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0920 18:43:09.850157  278711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0920 18:43:09.852211  278711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0920 18:43:09.854556  278711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0920 18:43:09.856055  278711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0920 18:43:09.950517  278711 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0920 18:43:09.950559  278711 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 18:43:09.950590  278711 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0920 18:43:09.950609  278711 ssh_runner.go:195] Run: which crictl
	I0920 18:43:09.950638  278711 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0920 18:43:09.950696  278711 ssh_runner.go:195] Run: which crictl
	I0920 18:43:10.009435  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0920 18:43:10.016081  278711 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0920 18:43:10.016102  278711 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0920 18:43:10.016126  278711 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0920 18:43:10.016138  278711 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0920 18:43:10.016167  278711 ssh_runner.go:195] Run: which crictl
	I0920 18:43:10.016181  278711 ssh_runner.go:195] Run: which crictl
	I0920 18:43:10.022215  278711 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0920 18:43:10.022237  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0920 18:43:10.022262  278711 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0920 18:43:10.022264  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0920 18:43:10.022294  278711 ssh_runner.go:195] Run: which crictl
	I0920 18:43:10.022225  278711 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0920 18:43:10.022338  278711 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0920 18:43:10.022377  278711 ssh_runner.go:195] Run: which crictl
	I0920 18:43:10.095962  278711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0920 18:43:10.096016  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0920 18:43:10.096038  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0920 18:43:10.096016  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0920 18:43:10.096069  278711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0920 18:43:10.101666  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0920 18:43:10.101697  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0920 18:43:10.101667  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0920 18:43:10.186642  278711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0920 18:43:10.186681  278711 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0920 18:43:10.186733  278711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0920 18:43:10.234638  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0920 18:43:10.234625  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0920 18:43:10.234656  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0920 18:43:10.234666  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0920 18:43:10.234702  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0920 18:43:10.254901  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0920 18:43:10.729100  278711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:43:12.969989  278711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (2.783224879s)
	I0920 18:43:12.970035  278711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0920 18:43:12.970051  278711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4: (2.735326463s)
	I0920 18:43:12.970067  278711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7: (2.735402089s)
	I0920 18:43:12.970088  278711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4: (2.735334718s)
	I0920 18:43:12.970128  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0920 18:43:12.970132  278711 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.241003991s)
	I0920 18:43:12.970160  278711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0920 18:43:12.970129  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0920 18:43:12.970091  278711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6: (2.735363042s)
	I0920 18:43:12.970116  278711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0: (2.735402607s)
	I0920 18:43:12.970239  278711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0920 18:43:12.970131  278711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4: (2.715192064s)
	I0920 18:43:12.970261  278711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0920 18:43:12.970265  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0920 18:43:12.970281  278711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0920 18:43:12.970318  278711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0920 18:43:13.041800  278711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0920 18:43:13.041854  278711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0920 18:43:13.041922  278711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0920 18:43:13.041940  278711 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0920 18:43:13.041953  278711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0920 18:43:13.041976  278711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0920 18:43:13.041982  278711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0920 18:43:13.062210  278711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0920 18:43:13.062216  278711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0920 18:43:13.062278  278711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0920 18:43:13.062329  278711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0920 18:43:13.062329  278711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0920 18:43:13.065946  278711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0920 18:43:13.495541  278711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0920 18:43:13.495602  278711 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0920 18:43:13.495555  278711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0920 18:43:13.495620  278711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0920 18:43:13.495650  278711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0920 18:43:13.495662  278711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0920 18:43:13.938062  278711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0920 18:43:13.938122  278711 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0920 18:43:13.938183  278711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0920 18:43:16.199805  278711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.261591354s)
	I0920 18:43:16.199863  278711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0920 18:43:16.199900  278711 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0920 18:43:16.200017  278711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0920 18:43:16.346068  278711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0920 18:43:16.346115  278711 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0920 18:43:16.346174  278711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0920 18:43:17.091939  278711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0920 18:43:17.091995  278711 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0920 18:43:17.092054  278711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0920 18:43:17.836338  278711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0920 18:43:17.836404  278711 cache_images.go:123] Successfully loaded all cached images
	I0920 18:43:17.836413  278711 cache_images.go:92] duration metric: took 8.29878712s to LoadCachedImages
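LoadCachedImages above inspects each required image with `podman image inspect`, removes any stale tag with `crictl rmi`, and streams the cached tarball in with `podman load -i`. A condensed sketch of that per-image loop (image names and tarball paths are taken from the log; the hash comparison and transfer plumbing are simplified):

package main

import (
	"fmt"
	"os/exec"
)

// ensureImage mirrors the per-image flow above: if the image is not already
// present, remove whatever is tagged with that name and load the cached tarball.
func ensureImage(image, tarball string) error {
	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
		return nil // already present
	}
	_ = exec.Command("sudo", "crictl", "rmi", image).Run() // ignore "not found"
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %w: %s", tarball, err, out)
	}
	fmt.Println("Transferred and loaded", image, "from cache")
	return nil
}

func main() {
	images := map[string]string{
		"registry.k8s.io/kube-proxy:v1.24.4": "/var/lib/minikube/images/kube-proxy_v1.24.4",
		"registry.k8s.io/pause:3.7":          "/var/lib/minikube/images/pause_3.7",
	}
	for img, tar := range images {
		if err := ensureImage(img, tar); err != nil {
			fmt.Println(err)
		}
	}
}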
	I0920 18:43:17.836429  278711 kubeadm.go:934] updating node { 192.168.39.205 8443 v1.24.4 crio true true} ...
	I0920 18:43:17.836551  278711 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-968336 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-968336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:43:17.836625  278711 ssh_runner.go:195] Run: crio config
	I0920 18:43:17.885236  278711 cni.go:84] Creating CNI manager for ""
	I0920 18:43:17.885263  278711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:43:17.885274  278711 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:43:17.885295  278711 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.205 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-968336 NodeName:test-preload-968336 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:43:17.885420  278711 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-968336"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.205
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.205"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:43:17.885501  278711 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0920 18:43:17.895604  278711 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:43:17.895691  278711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:43:17.905103  278711 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0920 18:43:17.921270  278711 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:43:17.937560  278711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
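
The kubeadm configuration printed above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that has just been written to /var/tmp/minikube/kubeadm.yaml.new on the node. A rough way to walk those documents and spot-check a field, assuming gopkg.in/yaml.v3 (not part of minikube itself) and a local copy of the file:

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // hypothetical local copy
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            // Print each document's kind and, for KubeletConfiguration, its cgroupDriver.
            kind, _ := doc["kind"].(string)
            fmt.Println("kind:", kind)
            if kind == "KubeletConfiguration" {
                fmt.Println("  cgroupDriver:", doc["cgroupDriver"])
            }
        }
    }
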
	I0920 18:43:17.955777  278711 ssh_runner.go:195] Run: grep 192.168.39.205	control-plane.minikube.internal$ /etc/hosts
	I0920 18:43:17.960179  278711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.205	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
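
The bash one-liner above makes the /etc/hosts update idempotent: any existing line ending in a tab plus control-plane.minikube.internal is filtered out, then a single fresh mapping to the node IP is appended. The same filter-and-append step in Go (a sketch that writes a temporary copy instead of replacing /etc/hosts, which the logged command does with sudo cp):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const host = "control-plane.minikube.internal"
        const ip = "192.168.39.205"

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any previous mapping for the control-plane name (grep -v $'\t<host>$').
            if strings.HasSuffix(line, "\t"+host) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            log.Fatal(err)
        }
    }
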
	I0920 18:43:17.976637  278711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:43:18.122925  278711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:43:18.140612  278711 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/test-preload-968336 for IP: 192.168.39.205
	I0920 18:43:18.140639  278711 certs.go:194] generating shared ca certs ...
	I0920 18:43:18.140660  278711 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:43:18.140836  278711 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 18:43:18.140918  278711 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 18:43:18.140933  278711 certs.go:256] generating profile certs ...
	I0920 18:43:18.141031  278711 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/test-preload-968336/client.key
	I0920 18:43:18.141123  278711 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/test-preload-968336/apiserver.key.a84897b7
	I0920 18:43:18.141174  278711 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/test-preload-968336/proxy-client.key
	I0920 18:43:18.141310  278711 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 18:43:18.141351  278711 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 18:43:18.141374  278711 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:43:18.141406  278711 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:43:18.141438  278711 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:43:18.141471  278711 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 18:43:18.141525  278711 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:43:18.142208  278711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:43:18.168708  278711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:43:18.195255  278711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:43:18.229203  278711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:43:18.258311  278711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/test-preload-968336/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 18:43:18.281840  278711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/test-preload-968336/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:43:18.305979  278711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/test-preload-968336/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:43:18.330248  278711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/test-preload-968336/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:43:18.363192  278711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:43:18.386850  278711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 18:43:18.410301  278711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 18:43:18.434727  278711 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:43:18.451674  278711 ssh_runner.go:195] Run: openssl version
	I0920 18:43:18.457457  278711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:43:18.468495  278711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:43:18.472950  278711 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:43:18.473028  278711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:43:18.478659  278711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:43:18.489378  278711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 18:43:18.501154  278711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 18:43:18.505735  278711 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 18:43:18.505808  278711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 18:43:18.511547  278711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 18:43:18.523713  278711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 18:43:18.536050  278711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 18:43:18.540625  278711 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 18:43:18.540700  278711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 18:43:18.546816  278711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:43:18.559452  278711 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:43:18.564457  278711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:43:18.570815  278711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:43:18.577066  278711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:43:18.583325  278711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:43:18.589528  278711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:43:18.595556  278711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
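
Each "openssl x509 -checkend 86400" call above asks whether the certificate stays valid for at least the next 24 hours (86400 seconds). The same check expressed with Go's crypto/x509; the path is one of the certificates named in the log and is only illustrative:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // Same question as `openssl x509 -checkend 86400`: does the cert outlive the next 24h?
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h")
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least another 24h")
    }
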
	I0920 18:43:18.601779  278711 kubeadm.go:392] StartCluster: {Name:test-preload-968336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-968336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:43:18.601872  278711 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:43:18.601968  278711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:43:18.642911  278711 cri.go:89] found id: ""
	I0920 18:43:18.642988  278711 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:43:18.653441  278711 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:43:18.653465  278711 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:43:18.653523  278711 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:43:18.663843  278711 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:43:18.664269  278711 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-968336" does not appear in /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 18:43:18.664391  278711 kubeconfig.go:62] /home/jenkins/minikube-integration/19679-237658/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-968336" cluster setting kubeconfig missing "test-preload-968336" context setting]
	I0920 18:43:18.664713  278711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:43:18.665348  278711 kapi.go:59] client config for test-preload-968336: &rest.Config{Host:"https://192.168.39.205:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/test-preload-968336/client.crt", KeyFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/test-preload-968336/client.key", CAFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 18:43:18.666105  278711 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:43:18.676127  278711 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.205
	I0920 18:43:18.676184  278711 kubeadm.go:1160] stopping kube-system containers ...
	I0920 18:43:18.676201  278711 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 18:43:18.676267  278711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:43:18.710512  278711 cri.go:89] found id: ""
	I0920 18:43:18.710586  278711 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 18:43:18.727267  278711 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:43:18.737550  278711 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:43:18.737574  278711 kubeadm.go:157] found existing configuration files:
	
	I0920 18:43:18.737646  278711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:43:18.747734  278711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:43:18.747809  278711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:43:18.757970  278711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:43:18.767283  278711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:43:18.767351  278711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:43:18.776709  278711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:43:18.785808  278711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:43:18.785873  278711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:43:18.795318  278711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:43:18.804288  278711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:43:18.804364  278711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:43:18.814064  278711 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:43:18.824306  278711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:43:18.922565  278711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:43:19.987686  278711 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.06508128s)
	I0920 18:43:19.987734  278711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:43:20.251443  278711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:43:20.335370  278711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
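
Because existing configuration files were found, the restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config rather than performing a full kubeadm init. A sketch of the same sequence with os/exec, using the binary path and config file shown in the log:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("/var/lib/minikube/binaries/v1.24.4/kubeadm", args...)
            out, err := cmd.CombinedOutput()
            if err != nil {
                log.Fatalf("kubeadm %v failed: %v\n%s", p, err, out)
            }
        }
    }
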
	I0920 18:43:20.416356  278711 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:43:20.416453  278711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:43:20.916543  278711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:43:21.416809  278711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:43:21.439355  278711 api_server.go:72] duration metric: took 1.023001389s to wait for apiserver process to appear ...
	I0920 18:43:21.439383  278711 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:43:21.439406  278711 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I0920 18:43:21.439884  278711 api_server.go:269] stopped: https://192.168.39.205:8443/healthz: Get "https://192.168.39.205:8443/healthz": dial tcp 192.168.39.205:8443: connect: connection refused
	I0920 18:43:21.939982  278711 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I0920 18:43:25.638316  278711 api_server.go:279] https://192.168.39.205:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 18:43:25.638356  278711 api_server.go:103] status: https://192.168.39.205:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 18:43:25.638373  278711 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I0920 18:43:25.704327  278711 api_server.go:279] https://192.168.39.205:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:43:25.704364  278711 api_server.go:103] status: https://192.168.39.205:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:43:25.939623  278711 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I0920 18:43:25.945158  278711 api_server.go:279] https://192.168.39.205:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:43:25.945198  278711 api_server.go:103] status: https://192.168.39.205:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:43:26.439768  278711 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I0920 18:43:26.448700  278711 api_server.go:279] https://192.168.39.205:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 18:43:26.448783  278711 api_server.go:103] status: https://192.168.39.205:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 18:43:26.940406  278711 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I0920 18:43:26.947186  278711 api_server.go:279] https://192.168.39.205:8443/healthz returned 200:
	ok
	I0920 18:43:26.953303  278711 api_server.go:141] control plane version: v1.24.4
	I0920 18:43:26.953334  278711 api_server.go:131] duration metric: took 5.513944701s to wait for apiserver health ...
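
The health wait above polls https://192.168.39.205:8443/healthz and treats 403 (anonymous request before RBAC bootstrap finishes) and 500 (post-start hooks still pending) as retryable, stopping once a plain 200 "ok" comes back. A minimal polling loop in the same spirit; InsecureSkipVerify is used here only because this sketch checks reachability, not server identity:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.39.205:8443/healthz"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body)
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never became healthy")
    }
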
	I0920 18:43:26.953343  278711 cni.go:84] Creating CNI manager for ""
	I0920 18:43:26.953350  278711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:43:26.954939  278711 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:43:26.956240  278711 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:43:26.967304  278711 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:43:26.987634  278711 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:43:26.987743  278711 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 18:43:26.987760  278711 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 18:43:27.008573  278711 system_pods.go:59] 7 kube-system pods found
	I0920 18:43:27.008615  278711 system_pods.go:61] "coredns-6d4b75cb6d-xrndt" [267c68e5-d6c2-4bf7-915e-e8970b7c9323] Running
	I0920 18:43:27.008624  278711 system_pods.go:61] "etcd-test-preload-968336" [c1b6d833-86a0-411c-813e-71b31e0f8f4e] Running
	I0920 18:43:27.008629  278711 system_pods.go:61] "kube-apiserver-test-preload-968336" [f9e318f4-78e6-4dd9-bb7e-99d70202dec5] Running
	I0920 18:43:27.008636  278711 system_pods.go:61] "kube-controller-manager-test-preload-968336" [ceb1fa0e-bfa7-4c88-af92-c13f0a1ffaf2] Running
	I0920 18:43:27.008650  278711 system_pods.go:61] "kube-proxy-4wflr" [8b601af3-dc4b-4b11-a089-2682b802698c] Running
	I0920 18:43:27.008665  278711 system_pods.go:61] "kube-scheduler-test-preload-968336" [34eb138e-d020-4301-a54b-938afe360b90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 18:43:27.008674  278711 system_pods.go:61] "storage-provisioner" [cc88aebd-df1d-443f-ac87-2008f728ba9e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 18:43:27.008687  278711 system_pods.go:74] duration metric: took 21.017097ms to wait for pod list to return data ...
	I0920 18:43:27.008702  278711 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:43:27.012894  278711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:43:27.012934  278711 node_conditions.go:123] node cpu capacity is 2
	I0920 18:43:27.012959  278711 node_conditions.go:105] duration metric: took 4.248169ms to run NodePressure ...
	I0920 18:43:27.012981  278711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 18:43:27.256833  278711 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 18:43:27.262953  278711 kubeadm.go:739] kubelet initialised
	I0920 18:43:27.262987  278711 kubeadm.go:740] duration metric: took 6.123966ms waiting for restarted kubelet to initialise ...
	I0920 18:43:27.262997  278711 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:43:27.269848  278711 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-xrndt" in "kube-system" namespace to be "Ready" ...
	I0920 18:43:27.278907  278711 pod_ready.go:98] node "test-preload-968336" hosting pod "coredns-6d4b75cb6d-xrndt" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-968336" has status "Ready":"False"
	I0920 18:43:27.278943  278711 pod_ready.go:82] duration metric: took 9.056922ms for pod "coredns-6d4b75cb6d-xrndt" in "kube-system" namespace to be "Ready" ...
	E0920 18:43:27.278954  278711 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-968336" hosting pod "coredns-6d4b75cb6d-xrndt" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-968336" has status "Ready":"False"
	I0920 18:43:27.278964  278711 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-968336" in "kube-system" namespace to be "Ready" ...
	I0920 18:43:27.285145  278711 pod_ready.go:98] node "test-preload-968336" hosting pod "etcd-test-preload-968336" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-968336" has status "Ready":"False"
	I0920 18:43:27.285177  278711 pod_ready.go:82] duration metric: took 6.199673ms for pod "etcd-test-preload-968336" in "kube-system" namespace to be "Ready" ...
	E0920 18:43:27.285190  278711 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-968336" hosting pod "etcd-test-preload-968336" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-968336" has status "Ready":"False"
	I0920 18:43:27.285199  278711 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-968336" in "kube-system" namespace to be "Ready" ...
	I0920 18:43:27.296161  278711 pod_ready.go:98] node "test-preload-968336" hosting pod "kube-apiserver-test-preload-968336" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-968336" has status "Ready":"False"
	I0920 18:43:27.296192  278711 pod_ready.go:82] duration metric: took 10.981928ms for pod "kube-apiserver-test-preload-968336" in "kube-system" namespace to be "Ready" ...
	E0920 18:43:27.296202  278711 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-968336" hosting pod "kube-apiserver-test-preload-968336" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-968336" has status "Ready":"False"
	I0920 18:43:27.296209  278711 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-968336" in "kube-system" namespace to be "Ready" ...
	I0920 18:43:27.393590  278711 pod_ready.go:98] node "test-preload-968336" hosting pod "kube-controller-manager-test-preload-968336" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-968336" has status "Ready":"False"
	I0920 18:43:27.393616  278711 pod_ready.go:82] duration metric: took 97.397189ms for pod "kube-controller-manager-test-preload-968336" in "kube-system" namespace to be "Ready" ...
	E0920 18:43:27.393627  278711 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-968336" hosting pod "kube-controller-manager-test-preload-968336" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-968336" has status "Ready":"False"
	I0920 18:43:27.393632  278711 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4wflr" in "kube-system" namespace to be "Ready" ...
	I0920 18:43:27.820150  278711 pod_ready.go:98] node "test-preload-968336" hosting pod "kube-proxy-4wflr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-968336" has status "Ready":"False"
	I0920 18:43:27.820179  278711 pod_ready.go:82] duration metric: took 426.537646ms for pod "kube-proxy-4wflr" in "kube-system" namespace to be "Ready" ...
	E0920 18:43:27.820188  278711 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-968336" hosting pod "kube-proxy-4wflr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-968336" has status "Ready":"False"
	I0920 18:43:27.820193  278711 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-968336" in "kube-system" namespace to be "Ready" ...
	I0920 18:43:28.194472  278711 pod_ready.go:98] node "test-preload-968336" hosting pod "kube-scheduler-test-preload-968336" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-968336" has status "Ready":"False"
	I0920 18:43:28.194499  278711 pod_ready.go:82] duration metric: took 374.29929ms for pod "kube-scheduler-test-preload-968336" in "kube-system" namespace to be "Ready" ...
	E0920 18:43:28.194508  278711 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-968336" hosting pod "kube-scheduler-test-preload-968336" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-968336" has status "Ready":"False"
	I0920 18:43:28.194515  278711 pod_ready.go:39] duration metric: took 931.506419ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
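
The pod_ready helper keeps re-fetching each system-critical pod and inspects its Ready condition, skipping early while the node itself still reports Ready: False. A rough equivalent of the per-pod check with client-go, assuming the kubeconfig path from the log and one of the pod names above:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19679-237658/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        for {
            pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-6d4b75cb6d-xrndt", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    // A pod counts as Ready once its Ready condition is True.
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
            }
            select {
            case <-ctx.Done():
                log.Fatal("timed out waiting for pod to become Ready")
            case <-time.After(2 * time.Second):
            }
        }
    }
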
	I0920 18:43:28.194535  278711 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:43:28.206547  278711 ops.go:34] apiserver oom_adj: -16
	I0920 18:43:28.206571  278711 kubeadm.go:597] duration metric: took 9.553099469s to restartPrimaryControlPlane
	I0920 18:43:28.206580  278711 kubeadm.go:394] duration metric: took 9.604811453s to StartCluster
	I0920 18:43:28.206605  278711 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:43:28.206682  278711 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 18:43:28.207303  278711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:43:28.207558  278711 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:43:28.207645  278711 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:43:28.207768  278711 addons.go:69] Setting storage-provisioner=true in profile "test-preload-968336"
	I0920 18:43:28.207786  278711 addons.go:69] Setting default-storageclass=true in profile "test-preload-968336"
	I0920 18:43:28.207810  278711 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-968336"
	I0920 18:43:28.207853  278711 config.go:182] Loaded profile config "test-preload-968336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0920 18:43:28.207790  278711 addons.go:234] Setting addon storage-provisioner=true in "test-preload-968336"
	W0920 18:43:28.207885  278711 addons.go:243] addon storage-provisioner should already be in state true
	I0920 18:43:28.207919  278711 host.go:66] Checking if "test-preload-968336" exists ...
	I0920 18:43:28.208127  278711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:43:28.208165  278711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:43:28.208243  278711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:43:28.208288  278711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:43:28.209566  278711 out.go:177] * Verifying Kubernetes components...
	I0920 18:43:28.211333  278711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:43:28.223860  278711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36605
	I0920 18:43:28.224085  278711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33383
	I0920 18:43:28.224494  278711 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:43:28.224542  278711 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:43:28.225036  278711 main.go:141] libmachine: Using API Version  1
	I0920 18:43:28.225063  278711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:43:28.225134  278711 main.go:141] libmachine: Using API Version  1
	I0920 18:43:28.225150  278711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:43:28.225421  278711 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:43:28.225518  278711 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:43:28.225685  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetState
	I0920 18:43:28.226017  278711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:43:28.226062  278711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:43:28.227861  278711 kapi.go:59] client config for test-preload-968336: &rest.Config{Host:"https://192.168.39.205:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/test-preload-968336/client.crt", KeyFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/profiles/test-preload-968336/client.key", CAFile:"/home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 18:43:28.228091  278711 addons.go:234] Setting addon default-storageclass=true in "test-preload-968336"
	W0920 18:43:28.228105  278711 addons.go:243] addon default-storageclass should already be in state true
	I0920 18:43:28.228127  278711 host.go:66] Checking if "test-preload-968336" exists ...
	I0920 18:43:28.228355  278711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:43:28.228393  278711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:43:28.242195  278711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
	I0920 18:43:28.242743  278711 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:43:28.242781  278711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36551
	I0920 18:43:28.243222  278711 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:43:28.243316  278711 main.go:141] libmachine: Using API Version  1
	I0920 18:43:28.243337  278711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:43:28.243691  278711 main.go:141] libmachine: Using API Version  1
	I0920 18:43:28.243707  278711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:43:28.243755  278711 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:43:28.243956  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetState
	I0920 18:43:28.244008  278711 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:43:28.244574  278711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:43:28.244621  278711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:43:28.245588  278711 main.go:141] libmachine: (test-preload-968336) Calling .DriverName
	I0920 18:43:28.248302  278711 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:43:28.250059  278711 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:43:28.250080  278711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:43:28.250098  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHHostname
	I0920 18:43:28.253263  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:28.253695  278711 main.go:141] libmachine: (test-preload-968336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:08:c1", ip: ""} in network mk-test-preload-968336: {Iface:virbr1 ExpiryTime:2024-09-20 19:41:06 +0000 UTC Type:0 Mac:52:54:00:cc:08:c1 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-968336 Clientid:01:52:54:00:cc:08:c1}
	I0920 18:43:28.253725  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined IP address 192.168.39.205 and MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:28.253897  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHPort
	I0920 18:43:28.254110  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHKeyPath
	I0920 18:43:28.254290  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHUsername
	I0920 18:43:28.254414  278711 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/test-preload-968336/id_rsa Username:docker}
	I0920 18:43:28.293627  278711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38433
	I0920 18:43:28.294210  278711 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:43:28.294727  278711 main.go:141] libmachine: Using API Version  1
	I0920 18:43:28.294750  278711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:43:28.295086  278711 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:43:28.295274  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetState
	I0920 18:43:28.296838  278711 main.go:141] libmachine: (test-preload-968336) Calling .DriverName
	I0920 18:43:28.297092  278711 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:43:28.297129  278711 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:43:28.297157  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHHostname
	I0920 18:43:28.300032  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:28.300707  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHPort
	I0920 18:43:28.300745  278711 main.go:141] libmachine: (test-preload-968336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:08:c1", ip: ""} in network mk-test-preload-968336: {Iface:virbr1 ExpiryTime:2024-09-20 19:41:06 +0000 UTC Type:0 Mac:52:54:00:cc:08:c1 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-968336 Clientid:01:52:54:00:cc:08:c1}
	I0920 18:43:28.300768  278711 main.go:141] libmachine: (test-preload-968336) DBG | domain test-preload-968336 has defined IP address 192.168.39.205 and MAC address 52:54:00:cc:08:c1 in network mk-test-preload-968336
	I0920 18:43:28.300891  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHKeyPath
	I0920 18:43:28.301057  278711 main.go:141] libmachine: (test-preload-968336) Calling .GetSSHUsername
	I0920 18:43:28.301232  278711 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/test-preload-968336/id_rsa Username:docker}
	I0920 18:43:28.380645  278711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:43:28.399159  278711 node_ready.go:35] waiting up to 6m0s for node "test-preload-968336" to be "Ready" ...
	I0920 18:43:28.513878  278711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:43:28.519116  278711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:43:29.498165  278711 main.go:141] libmachine: Making call to close driver server
	I0920 18:43:29.498194  278711 main.go:141] libmachine: (test-preload-968336) Calling .Close
	I0920 18:43:29.498309  278711 main.go:141] libmachine: Making call to close driver server
	I0920 18:43:29.498330  278711 main.go:141] libmachine: (test-preload-968336) Calling .Close
	I0920 18:43:29.498484  278711 main.go:141] libmachine: (test-preload-968336) DBG | Closing plugin on server side
	I0920 18:43:29.498525  278711 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:43:29.498534  278711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:43:29.498543  278711 main.go:141] libmachine: Making call to close driver server
	I0920 18:43:29.498552  278711 main.go:141] libmachine: (test-preload-968336) Calling .Close
	I0920 18:43:29.498666  278711 main.go:141] libmachine: (test-preload-968336) DBG | Closing plugin on server side
	I0920 18:43:29.498718  278711 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:43:29.498738  278711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:43:29.498749  278711 main.go:141] libmachine: Making call to close driver server
	I0920 18:43:29.498761  278711 main.go:141] libmachine: (test-preload-968336) Calling .Close
	I0920 18:43:29.498796  278711 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:43:29.498825  278711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:43:29.498987  278711 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:43:29.498999  278711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:43:29.499025  278711 main.go:141] libmachine: (test-preload-968336) DBG | Closing plugin on server side
	I0920 18:43:29.506624  278711 main.go:141] libmachine: Making call to close driver server
	I0920 18:43:29.506643  278711 main.go:141] libmachine: (test-preload-968336) Calling .Close
	I0920 18:43:29.506918  278711 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:43:29.506939  278711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:43:29.506952  278711 main.go:141] libmachine: (test-preload-968336) DBG | Closing plugin on server side
	I0920 18:43:29.509153  278711 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0920 18:43:29.510506  278711 addons.go:510] duration metric: took 1.302867919s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0920 18:43:30.403271  278711 node_ready.go:53] node "test-preload-968336" has status "Ready":"False"
	I0920 18:43:32.903235  278711 node_ready.go:53] node "test-preload-968336" has status "Ready":"False"
	I0920 18:43:34.903343  278711 node_ready.go:53] node "test-preload-968336" has status "Ready":"False"
	I0920 18:43:36.403791  278711 node_ready.go:49] node "test-preload-968336" has status "Ready":"True"
	I0920 18:43:36.403818  278711 node_ready.go:38] duration metric: took 8.004618305s for node "test-preload-968336" to be "Ready" ...
	I0920 18:43:36.403827  278711 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:43:36.409039  278711 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-xrndt" in "kube-system" namespace to be "Ready" ...
	I0920 18:43:36.414367  278711 pod_ready.go:93] pod "coredns-6d4b75cb6d-xrndt" in "kube-system" namespace has status "Ready":"True"
	I0920 18:43:36.414393  278711 pod_ready.go:82] duration metric: took 5.318803ms for pod "coredns-6d4b75cb6d-xrndt" in "kube-system" namespace to be "Ready" ...
	I0920 18:43:36.414401  278711 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-968336" in "kube-system" namespace to be "Ready" ...
	I0920 18:43:36.419589  278711 pod_ready.go:93] pod "etcd-test-preload-968336" in "kube-system" namespace has status "Ready":"True"
	I0920 18:43:36.419622  278711 pod_ready.go:82] duration metric: took 5.214013ms for pod "etcd-test-preload-968336" in "kube-system" namespace to be "Ready" ...
	I0920 18:43:36.419630  278711 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-968336" in "kube-system" namespace to be "Ready" ...
	I0920 18:43:38.427097  278711 pod_ready.go:103] pod "kube-apiserver-test-preload-968336" in "kube-system" namespace has status "Ready":"False"
	I0920 18:43:39.932584  278711 pod_ready.go:93] pod "kube-apiserver-test-preload-968336" in "kube-system" namespace has status "Ready":"True"
	I0920 18:43:39.932621  278711 pod_ready.go:82] duration metric: took 3.512976375s for pod "kube-apiserver-test-preload-968336" in "kube-system" namespace to be "Ready" ...
	I0920 18:43:39.932634  278711 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-968336" in "kube-system" namespace to be "Ready" ...
	I0920 18:43:40.939551  278711 pod_ready.go:93] pod "kube-controller-manager-test-preload-968336" in "kube-system" namespace has status "Ready":"True"
	I0920 18:43:40.939578  278711 pod_ready.go:82] duration metric: took 1.006935718s for pod "kube-controller-manager-test-preload-968336" in "kube-system" namespace to be "Ready" ...
	I0920 18:43:40.939592  278711 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4wflr" in "kube-system" namespace to be "Ready" ...
	I0920 18:43:40.946071  278711 pod_ready.go:93] pod "kube-proxy-4wflr" in "kube-system" namespace has status "Ready":"True"
	I0920 18:43:40.946095  278711 pod_ready.go:82] duration metric: took 6.495275ms for pod "kube-proxy-4wflr" in "kube-system" namespace to be "Ready" ...
	I0920 18:43:40.946104  278711 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-968336" in "kube-system" namespace to be "Ready" ...
	I0920 18:43:40.951810  278711 pod_ready.go:93] pod "kube-scheduler-test-preload-968336" in "kube-system" namespace has status "Ready":"True"
	I0920 18:43:40.951836  278711 pod_ready.go:82] duration metric: took 5.72437ms for pod "kube-scheduler-test-preload-968336" in "kube-system" namespace to be "Ready" ...
	I0920 18:43:40.951846  278711 pod_ready.go:39] duration metric: took 4.548004261s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:43:40.951861  278711 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:43:40.951928  278711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:43:40.966837  278711 api_server.go:72] duration metric: took 12.75924238s to wait for apiserver process to appear ...
	I0920 18:43:40.966888  278711 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:43:40.966915  278711 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I0920 18:43:40.974553  278711 api_server.go:279] https://192.168.39.205:8443/healthz returned 200:
	ok
	I0920 18:43:40.975875  278711 api_server.go:141] control plane version: v1.24.4
	I0920 18:43:40.975901  278711 api_server.go:131] duration metric: took 9.004261ms to wait for apiserver health ...
	I0920 18:43:40.975911  278711 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:43:41.009295  278711 system_pods.go:59] 7 kube-system pods found
	I0920 18:43:41.009333  278711 system_pods.go:61] "coredns-6d4b75cb6d-xrndt" [267c68e5-d6c2-4bf7-915e-e8970b7c9323] Running
	I0920 18:43:41.009339  278711 system_pods.go:61] "etcd-test-preload-968336" [c1b6d833-86a0-411c-813e-71b31e0f8f4e] Running
	I0920 18:43:41.009345  278711 system_pods.go:61] "kube-apiserver-test-preload-968336" [f9e318f4-78e6-4dd9-bb7e-99d70202dec5] Running
	I0920 18:43:41.009349  278711 system_pods.go:61] "kube-controller-manager-test-preload-968336" [ceb1fa0e-bfa7-4c88-af92-c13f0a1ffaf2] Running
	I0920 18:43:41.009352  278711 system_pods.go:61] "kube-proxy-4wflr" [8b601af3-dc4b-4b11-a089-2682b802698c] Running
	I0920 18:43:41.009355  278711 system_pods.go:61] "kube-scheduler-test-preload-968336" [34eb138e-d020-4301-a54b-938afe360b90] Running
	I0920 18:43:41.009362  278711 system_pods.go:61] "storage-provisioner" [cc88aebd-df1d-443f-ac87-2008f728ba9e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 18:43:41.009370  278711 system_pods.go:74] duration metric: took 33.452297ms to wait for pod list to return data ...
	I0920 18:43:41.009385  278711 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:43:41.204234  278711 default_sa.go:45] found service account: "default"
	I0920 18:43:41.204268  278711 default_sa.go:55] duration metric: took 194.876868ms for default service account to be created ...
	I0920 18:43:41.204281  278711 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:43:41.408612  278711 system_pods.go:86] 7 kube-system pods found
	I0920 18:43:41.408655  278711 system_pods.go:89] "coredns-6d4b75cb6d-xrndt" [267c68e5-d6c2-4bf7-915e-e8970b7c9323] Running
	I0920 18:43:41.408663  278711 system_pods.go:89] "etcd-test-preload-968336" [c1b6d833-86a0-411c-813e-71b31e0f8f4e] Running
	I0920 18:43:41.408668  278711 system_pods.go:89] "kube-apiserver-test-preload-968336" [f9e318f4-78e6-4dd9-bb7e-99d70202dec5] Running
	I0920 18:43:41.408675  278711 system_pods.go:89] "kube-controller-manager-test-preload-968336" [ceb1fa0e-bfa7-4c88-af92-c13f0a1ffaf2] Running
	I0920 18:43:41.408679  278711 system_pods.go:89] "kube-proxy-4wflr" [8b601af3-dc4b-4b11-a089-2682b802698c] Running
	I0920 18:43:41.408683  278711 system_pods.go:89] "kube-scheduler-test-preload-968336" [34eb138e-d020-4301-a54b-938afe360b90] Running
	I0920 18:43:41.408689  278711 system_pods.go:89] "storage-provisioner" [cc88aebd-df1d-443f-ac87-2008f728ba9e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 18:43:41.408699  278711 system_pods.go:126] duration metric: took 204.410915ms to wait for k8s-apps to be running ...
	I0920 18:43:41.408714  278711 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:43:41.408772  278711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:43:41.423305  278711 system_svc.go:56] duration metric: took 14.580304ms WaitForService to wait for kubelet
	I0920 18:43:41.423340  278711 kubeadm.go:582] duration metric: took 13.215753812s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:43:41.423361  278711 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:43:41.603843  278711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:43:41.603883  278711 node_conditions.go:123] node cpu capacity is 2
	I0920 18:43:41.603898  278711 node_conditions.go:105] duration metric: took 180.530107ms to run NodePressure ...
	I0920 18:43:41.603912  278711 start.go:241] waiting for startup goroutines ...
	I0920 18:43:41.603922  278711 start.go:246] waiting for cluster config update ...
	I0920 18:43:41.603932  278711 start.go:255] writing updated cluster config ...
	I0920 18:43:41.604259  278711 ssh_runner.go:195] Run: rm -f paused
	I0920 18:43:41.652949  278711 start.go:600] kubectl: 1.31.1, cluster: 1.24.4 (minor skew: 7)
	I0920 18:43:41.655266  278711 out.go:201] 
	W0920 18:43:41.656688  278711 out.go:270] ! /usr/local/bin/kubectl is version 1.31.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0920 18:43:41.658074  278711 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0920 18:43:41.659663  278711 out.go:177] * Done! kubectl is now configured to use "test-preload-968336" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.586469556Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7696094f8a61fb35b0aca8bbab203e6e6a75bf19206e18ea022bef594fed2019,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-xrndt,Uid:267c68e5-d6c2-4bf7-915e-e8970b7c9323,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726857814393022094,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-xrndt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 267c68e5-d6c2-4bf7-915e-e8970b7c9323,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:43:26.375636962Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:12cad447e267dc18ffd6725af6c8aa488c9c8cfdcf5098e698ae2c6e5a6b44ca,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:cc88aebd-df1d-443f-ac87-2008f728ba9e,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726857807284759974,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc88aebd-df1d-443f-ac87-2008f728ba9e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-20T18:43:26.375634900Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9f875c03eeccdb14e96c91aad46c395e7a51470b72c3a29f959d574c4bee03f9,Metadata:&PodSandboxMetadata{Name:kube-proxy-4wflr,Uid:8b601af3-dc4b-4b11-a089-2682b802698c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726857806999097174,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4wflr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b601af3-dc4b-4b11-a089-2682b802698c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:43:26.375631552Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:42d572deb1fb80de1dc3002b32361c6a06695371ae8159e0508ec402bba4a536,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-968336,Uid:0710f1e
863e4afedcc2546c4b0c5c82f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726857800920606028,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-968336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0710f1e863e4afedcc2546c4b0c5c82f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.205:8443,kubernetes.io/config.hash: 0710f1e863e4afedcc2546c4b0c5c82f,kubernetes.io/config.seen: 2024-09-20T18:43:20.381471172Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:af3949df85b49442b06945c227b361c93958f828027e326a95ad441676d53869,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-968336,Uid:2aaa4ea74d36e0d7933e5955f3b05702,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726857800919590237,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-
test-preload-968336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2aaa4ea74d36e0d7933e5955f3b05702,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.205:2379,kubernetes.io/config.hash: 2aaa4ea74d36e0d7933e5955f3b05702,kubernetes.io/config.seen: 2024-09-20T18:43:20.431480424Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9b870311f1e1c324c0c2ab942714071bf6efed3894737d97d8102e6ffa8d456e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-968336,Uid:d9eaf7716b3a0482adca9fe39cfd2596,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726857800911482303,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-968336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9eaf7716b3a0482adca9fe39cfd2596,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d9eaf7716b3a048
2adca9fe39cfd2596,kubernetes.io/config.seen: 2024-09-20T18:43:20.381512309Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7c7106f90d52cced50c3168ccd742cad870f4198fb8b8b5973dac0fb4e9b0920,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-968336,Uid:dd9aadc2e78ee5045d55e4db5d621994,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726857800909606829,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-968336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd9aadc2e78ee5045d55e4db5d621994,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dd9aadc2e78ee5045d55e4db5d621994,kubernetes.io/config.seen: 2024-09-20T18:43:20.381510768Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4901e0bf-4230-4d1e-8fbd-abf05955f4c2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.587270851Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ecee998-59ee-4d1e-b660-ec8c28f80522 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.587357378Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ecee998-59ee-4d1e-b660-ec8c28f80522 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.587560936Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d466ef0a3907594caf7dee705ac03efcb373add26802656b157782e01876642,PodSandboxId:7696094f8a61fb35b0aca8bbab203e6e6a75bf19206e18ea022bef594fed2019,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726857814610031031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xrndt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 267c68e5-d6c2-4bf7-915e-e8970b7c9323,},Annotations:map[string]string{io.kubernetes.container.hash: 482e4c23,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00af446faade18f8ee76f71c9bf56585cbb7fea668a849491ff363f5ddb3d189,PodSandboxId:12cad447e267dc18ffd6725af6c8aa488c9c8cfdcf5098e698ae2c6e5a6b44ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726857807537586699,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: cc88aebd-df1d-443f-ac87-2008f728ba9e,},Annotations:map[string]string{io.kubernetes.container.hash: 99c2ac8d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a16c8e9f94e9ddabb2fed9e570a529133d9b4097055dacf8c96871f72adcaa0c,PodSandboxId:9f875c03eeccdb14e96c91aad46c395e7a51470b72c3a29f959d574c4bee03f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726857807136393529,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wflr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b6
01af3-dc4b-4b11-a089-2682b802698c,},Annotations:map[string]string{io.kubernetes.container.hash: aa041458,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1e2ad92b6f6394ff19111c24523c212050e6106dff7684dba5943641d57dc5,PodSandboxId:7c7106f90d52cced50c3168ccd742cad870f4198fb8b8b5973dac0fb4e9b0920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726857801121209814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-968336,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: dd9aadc2e78ee5045d55e4db5d621994,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5730a01c5c820fd1aa15844046601298b0542294c7115922e6801e0d803ad101,PodSandboxId:9b870311f1e1c324c0c2ab942714071bf6efed3894737d97d8102e6ffa8d456e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726857801178849531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-968336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: d9eaf7716b3a0482adca9fe39cfd2596,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a8f0a6480518de3809953bf366afc3f6a374624dff6219dd530d31df9766a19,PodSandboxId:af3949df85b49442b06945c227b361c93958f828027e326a95ad441676d53869,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726857801077902956,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-968336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2aaa4ea74d36e0d7933e5955f3b05702,},
Annotations:map[string]string{io.kubernetes.container.hash: 45b7c189,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03195db2eb0b9c2d35692dbc18f5e3193861e8412858c74d82ae11c8b3da1f5c,PodSandboxId:42d572deb1fb80de1dc3002b32361c6a06695371ae8159e0508ec402bba4a536,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726857801118589604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-968336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0710f1e863e4afedcc2546c4b0c5c82f,},Annotations
:map[string]string{io.kubernetes.container.hash: b60d1493,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ecee998-59ee-4d1e-b660-ec8c28f80522 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.597721502Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92f83e39-2c7a-4dfc-aae1-110370ff7818 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.597803818Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92f83e39-2c7a-4dfc-aae1-110370ff7818 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.599108479Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4a13613c-cb00-468c-ac69-ebb579af4a25 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.599595837Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857822599572144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a13613c-cb00-468c-ac69-ebb579af4a25 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.600201019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9095c9b8-3e4a-47cb-b413-7c844ee50b91 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.600249026Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9095c9b8-3e4a-47cb-b413-7c844ee50b91 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.600502616Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d466ef0a3907594caf7dee705ac03efcb373add26802656b157782e01876642,PodSandboxId:7696094f8a61fb35b0aca8bbab203e6e6a75bf19206e18ea022bef594fed2019,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726857814610031031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xrndt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 267c68e5-d6c2-4bf7-915e-e8970b7c9323,},Annotations:map[string]string{io.kubernetes.container.hash: 482e4c23,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00af446faade18f8ee76f71c9bf56585cbb7fea668a849491ff363f5ddb3d189,PodSandboxId:12cad447e267dc18ffd6725af6c8aa488c9c8cfdcf5098e698ae2c6e5a6b44ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726857807537586699,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: cc88aebd-df1d-443f-ac87-2008f728ba9e,},Annotations:map[string]string{io.kubernetes.container.hash: 99c2ac8d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a16c8e9f94e9ddabb2fed9e570a529133d9b4097055dacf8c96871f72adcaa0c,PodSandboxId:9f875c03eeccdb14e96c91aad46c395e7a51470b72c3a29f959d574c4bee03f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726857807136393529,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wflr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b6
01af3-dc4b-4b11-a089-2682b802698c,},Annotations:map[string]string{io.kubernetes.container.hash: aa041458,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1e2ad92b6f6394ff19111c24523c212050e6106dff7684dba5943641d57dc5,PodSandboxId:7c7106f90d52cced50c3168ccd742cad870f4198fb8b8b5973dac0fb4e9b0920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726857801121209814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-968336,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: dd9aadc2e78ee5045d55e4db5d621994,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5730a01c5c820fd1aa15844046601298b0542294c7115922e6801e0d803ad101,PodSandboxId:9b870311f1e1c324c0c2ab942714071bf6efed3894737d97d8102e6ffa8d456e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726857801178849531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-968336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: d9eaf7716b3a0482adca9fe39cfd2596,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a8f0a6480518de3809953bf366afc3f6a374624dff6219dd530d31df9766a19,PodSandboxId:af3949df85b49442b06945c227b361c93958f828027e326a95ad441676d53869,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726857801077902956,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-968336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2aaa4ea74d36e0d7933e5955f3b05702,},
Annotations:map[string]string{io.kubernetes.container.hash: 45b7c189,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03195db2eb0b9c2d35692dbc18f5e3193861e8412858c74d82ae11c8b3da1f5c,PodSandboxId:42d572deb1fb80de1dc3002b32361c6a06695371ae8159e0508ec402bba4a536,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726857801118589604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-968336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0710f1e863e4afedcc2546c4b0c5c82f,},Annotations
:map[string]string{io.kubernetes.container.hash: b60d1493,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9095c9b8-3e4a-47cb-b413-7c844ee50b91 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.635281966Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=128ab63f-97cb-4caf-9350-b36c24e81d34 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.635406455Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=128ab63f-97cb-4caf-9350-b36c24e81d34 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.636392029Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=952152b3-7244-45cf-98e9-2ec0a6f81656 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.636829581Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857822636807871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=952152b3-7244-45cf-98e9-2ec0a6f81656 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.637434207Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc1735c7-3a86-4a5b-8f7f-9e22f1ff12f4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.637485751Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc1735c7-3a86-4a5b-8f7f-9e22f1ff12f4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.637679951Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d466ef0a3907594caf7dee705ac03efcb373add26802656b157782e01876642,PodSandboxId:7696094f8a61fb35b0aca8bbab203e6e6a75bf19206e18ea022bef594fed2019,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726857814610031031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xrndt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 267c68e5-d6c2-4bf7-915e-e8970b7c9323,},Annotations:map[string]string{io.kubernetes.container.hash: 482e4c23,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00af446faade18f8ee76f71c9bf56585cbb7fea668a849491ff363f5ddb3d189,PodSandboxId:12cad447e267dc18ffd6725af6c8aa488c9c8cfdcf5098e698ae2c6e5a6b44ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726857807537586699,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: cc88aebd-df1d-443f-ac87-2008f728ba9e,},Annotations:map[string]string{io.kubernetes.container.hash: 99c2ac8d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a16c8e9f94e9ddabb2fed9e570a529133d9b4097055dacf8c96871f72adcaa0c,PodSandboxId:9f875c03eeccdb14e96c91aad46c395e7a51470b72c3a29f959d574c4bee03f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726857807136393529,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wflr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b6
01af3-dc4b-4b11-a089-2682b802698c,},Annotations:map[string]string{io.kubernetes.container.hash: aa041458,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1e2ad92b6f6394ff19111c24523c212050e6106dff7684dba5943641d57dc5,PodSandboxId:7c7106f90d52cced50c3168ccd742cad870f4198fb8b8b5973dac0fb4e9b0920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726857801121209814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-968336,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: dd9aadc2e78ee5045d55e4db5d621994,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5730a01c5c820fd1aa15844046601298b0542294c7115922e6801e0d803ad101,PodSandboxId:9b870311f1e1c324c0c2ab942714071bf6efed3894737d97d8102e6ffa8d456e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726857801178849531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-968336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: d9eaf7716b3a0482adca9fe39cfd2596,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a8f0a6480518de3809953bf366afc3f6a374624dff6219dd530d31df9766a19,PodSandboxId:af3949df85b49442b06945c227b361c93958f828027e326a95ad441676d53869,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726857801077902956,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-968336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2aaa4ea74d36e0d7933e5955f3b05702,},
Annotations:map[string]string{io.kubernetes.container.hash: 45b7c189,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03195db2eb0b9c2d35692dbc18f5e3193861e8412858c74d82ae11c8b3da1f5c,PodSandboxId:42d572deb1fb80de1dc3002b32361c6a06695371ae8159e0508ec402bba4a536,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726857801118589604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-968336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0710f1e863e4afedcc2546c4b0c5c82f,},Annotations
:map[string]string{io.kubernetes.container.hash: b60d1493,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc1735c7-3a86-4a5b-8f7f-9e22f1ff12f4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.672674231Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd538eb6-cd0b-46aa-bd32-a3229d284ed0 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.672751104Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd538eb6-cd0b-46aa-bd32-a3229d284ed0 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.673813972Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ca4b35f-4ddd-48e4-a872-374f8b6f4d20 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.674572519Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857822674541186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ca4b35f-4ddd-48e4-a872-374f8b6f4d20 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.675094874Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f76b4a57-f22e-4b5b-95d4-b6696c18109f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.675145979Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f76b4a57-f22e-4b5b-95d4-b6696c18109f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:43:42 test-preload-968336 crio[671]: time="2024-09-20 18:43:42.675512450Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d466ef0a3907594caf7dee705ac03efcb373add26802656b157782e01876642,PodSandboxId:7696094f8a61fb35b0aca8bbab203e6e6a75bf19206e18ea022bef594fed2019,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726857814610031031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xrndt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 267c68e5-d6c2-4bf7-915e-e8970b7c9323,},Annotations:map[string]string{io.kubernetes.container.hash: 482e4c23,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00af446faade18f8ee76f71c9bf56585cbb7fea668a849491ff363f5ddb3d189,PodSandboxId:12cad447e267dc18ffd6725af6c8aa488c9c8cfdcf5098e698ae2c6e5a6b44ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726857807537586699,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: cc88aebd-df1d-443f-ac87-2008f728ba9e,},Annotations:map[string]string{io.kubernetes.container.hash: 99c2ac8d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a16c8e9f94e9ddabb2fed9e570a529133d9b4097055dacf8c96871f72adcaa0c,PodSandboxId:9f875c03eeccdb14e96c91aad46c395e7a51470b72c3a29f959d574c4bee03f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726857807136393529,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wflr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b6
01af3-dc4b-4b11-a089-2682b802698c,},Annotations:map[string]string{io.kubernetes.container.hash: aa041458,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1e2ad92b6f6394ff19111c24523c212050e6106dff7684dba5943641d57dc5,PodSandboxId:7c7106f90d52cced50c3168ccd742cad870f4198fb8b8b5973dac0fb4e9b0920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726857801121209814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-968336,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: dd9aadc2e78ee5045d55e4db5d621994,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5730a01c5c820fd1aa15844046601298b0542294c7115922e6801e0d803ad101,PodSandboxId:9b870311f1e1c324c0c2ab942714071bf6efed3894737d97d8102e6ffa8d456e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726857801178849531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-968336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: d9eaf7716b3a0482adca9fe39cfd2596,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a8f0a6480518de3809953bf366afc3f6a374624dff6219dd530d31df9766a19,PodSandboxId:af3949df85b49442b06945c227b361c93958f828027e326a95ad441676d53869,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726857801077902956,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-968336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2aaa4ea74d36e0d7933e5955f3b05702,},
Annotations:map[string]string{io.kubernetes.container.hash: 45b7c189,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03195db2eb0b9c2d35692dbc18f5e3193861e8412858c74d82ae11c8b3da1f5c,PodSandboxId:42d572deb1fb80de1dc3002b32361c6a06695371ae8159e0508ec402bba4a536,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726857801118589604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-968336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0710f1e863e4afedcc2546c4b0c5c82f,},Annotations
:map[string]string{io.kubernetes.container.hash: b60d1493,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f76b4a57-f22e-4b5b-95d4-b6696c18109f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8d466ef0a3907       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   7696094f8a61f       coredns-6d4b75cb6d-xrndt
	00af446faade1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Exited              storage-provisioner       2                   12cad447e267d       storage-provisioner
	a16c8e9f94e9d       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   9f875c03eeccd       kube-proxy-4wflr
	5730a01c5c820       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   9b870311f1e1c       kube-scheduler-test-preload-968336
	0b1e2ad92b6f6       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   7c7106f90d52c       kube-controller-manager-test-preload-968336
	03195db2eb0b9       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   42d572deb1fb8       kube-apiserver-test-preload-968336
	1a8f0a6480518       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   af3949df85b49       etcd-test-preload-968336
	
	
	==> coredns [8d466ef0a3907594caf7dee705ac03efcb373add26802656b157782e01876642] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:58989 - 26360 "HINFO IN 1061419319829480134.8422040905467101310. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033765675s
	
	
	==> describe nodes <==
	Name:               test-preload-968336
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-968336
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=test-preload-968336
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_42_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:42:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-968336
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:43:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:43:35 +0000   Fri, 20 Sep 2024 18:41:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:43:35 +0000   Fri, 20 Sep 2024 18:41:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:43:35 +0000   Fri, 20 Sep 2024 18:41:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:43:35 +0000   Fri, 20 Sep 2024 18:43:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    test-preload-968336
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3c207683ba81469cb7f668399c2dd6da
	  System UUID:                3c207683-ba81-469c-b7f6-68399c2dd6da
	  Boot ID:                    6641d6f2-3e9b-4c83-9203-eef176bbb71f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-xrndt                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     86s
	  kube-system                 etcd-test-preload-968336                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         100s
	  kube-system                 kube-apiserver-test-preload-968336             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-test-preload-968336    200m (10%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-4wflr                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-test-preload-968336             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15s                kube-proxy       
	  Normal  Starting                 84s                kube-proxy       
	  Normal  Starting                 99s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  99s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  99s                kubelet          Node test-preload-968336 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                kubelet          Node test-preload-968336 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                kubelet          Node test-preload-968336 status is now: NodeHasSufficientPID
	  Normal  NodeReady                88s                kubelet          Node test-preload-968336 status is now: NodeReady
	  Normal  RegisteredNode           87s                node-controller  Node test-preload-968336 event: Registered Node test-preload-968336 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-968336 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-968336 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-968336 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                 node-controller  Node test-preload-968336 event: Registered Node test-preload-968336 in Controller
	
	
	==> dmesg <==
	[Sep20 18:42] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050468] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037920] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.774265] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.907618] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.348640] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep20 18:43] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.058726] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056599] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.176705] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.117801] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.271591] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[ +12.774027] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.062120] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.058991] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +5.722889] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.376906] systemd-fstab-generator[1819]: Ignoring "noauto" option for root device
	[  +6.148892] kauditd_printk_skb: 59 callbacks suppressed
	
	
	==> etcd [1a8f0a6480518de3809953bf366afc3f6a374624dff6219dd530d31df9766a19] <==
	{"level":"info","ts":"2024-09-20T18:43:21.544Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"b2e12d85c3b1f69e","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-20T18:43:21.544Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-20T18:43:21.551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2e12d85c3b1f69e switched to configuration voters=(12889633661048190622)"}
	{"level":"info","ts":"2024-09-20T18:43:21.551Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"38e4ac523bec2149","local-member-id":"b2e12d85c3b1f69e","added-peer-id":"b2e12d85c3b1f69e","added-peer-peer-urls":["https://192.168.39.205:2380"]}
	{"level":"info","ts":"2024-09-20T18:43:21.551Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"38e4ac523bec2149","local-member-id":"b2e12d85c3b1f69e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:43:21.551Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:43:21.565Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.205:2380"}
	{"level":"info","ts":"2024-09-20T18:43:21.565Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.205:2380"}
	{"level":"info","ts":"2024-09-20T18:43:21.561Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T18:43:21.566Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b2e12d85c3b1f69e","initial-advertise-peer-urls":["https://192.168.39.205:2380"],"listen-peer-urls":["https://192.168.39.205:2380"],"advertise-client-urls":["https://192.168.39.205:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.205:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T18:43:21.566Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T18:43:23.196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2e12d85c3b1f69e is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-20T18:43:23.196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2e12d85c3b1f69e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-20T18:43:23.196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2e12d85c3b1f69e received MsgPreVoteResp from b2e12d85c3b1f69e at term 2"}
	{"level":"info","ts":"2024-09-20T18:43:23.196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2e12d85c3b1f69e became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T18:43:23.196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2e12d85c3b1f69e received MsgVoteResp from b2e12d85c3b1f69e at term 3"}
	{"level":"info","ts":"2024-09-20T18:43:23.196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2e12d85c3b1f69e became leader at term 3"}
	{"level":"info","ts":"2024-09-20T18:43:23.196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2e12d85c3b1f69e elected leader b2e12d85c3b1f69e at term 3"}
	{"level":"info","ts":"2024-09-20T18:43:23.202Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2e12d85c3b1f69e","local-member-attributes":"{Name:test-preload-968336 ClientURLs:[https://192.168.39.205:2379]}","request-path":"/0/members/b2e12d85c3b1f69e/attributes","cluster-id":"38e4ac523bec2149","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:43:23.202Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:43:23.204Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:43:23.204Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:43:23.204Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:43:23.205Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T18:43:23.206Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.205:2379"}
	
	
	==> kernel <==
	 18:43:42 up 0 min,  0 users,  load average: 0.61, 0.17, 0.06
	Linux test-preload-968336 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [03195db2eb0b9c2d35692dbc18f5e3193861e8412858c74d82ae11c8b3da1f5c] <==
	I0920 18:43:25.603662       1 controller.go:85] Starting OpenAPI V3 controller
	I0920 18:43:25.603721       1 naming_controller.go:291] Starting NamingConditionController
	I0920 18:43:25.603778       1 establishing_controller.go:76] Starting EstablishingController
	I0920 18:43:25.603830       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0920 18:43:25.603872       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0920 18:43:25.603909       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0920 18:43:25.691608       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0920 18:43:25.691699       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0920 18:43:25.696072       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0920 18:43:25.697645       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 18:43:25.704703       1 cache.go:39] Caches are synced for autoregister controller
	I0920 18:43:25.704870       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0920 18:43:25.707885       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0920 18:43:25.736775       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0920 18:43:25.767181       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 18:43:26.283656       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0920 18:43:26.595852       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 18:43:27.134098       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0920 18:43:27.154870       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0920 18:43:27.204917       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0920 18:43:27.226176       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 18:43:27.233903       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0920 18:43:27.558391       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0920 18:43:38.037217       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 18:43:38.066613       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0b1e2ad92b6f6394ff19111c24523c212050e6106dff7684dba5943641d57dc5] <==
	I0920 18:43:38.045745       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0920 18:43:38.049366       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0920 18:43:38.051754       1 shared_informer.go:262] Caches are synced for deployment
	I0920 18:43:38.052083       1 shared_informer.go:262] Caches are synced for endpoint
	I0920 18:43:38.053709       1 shared_informer.go:262] Caches are synced for stateful set
	I0920 18:43:38.055421       1 shared_informer.go:262] Caches are synced for cronjob
	I0920 18:43:38.069211       1 shared_informer.go:262] Caches are synced for ephemeral
	I0920 18:43:38.070519       1 shared_informer.go:262] Caches are synced for namespace
	I0920 18:43:38.071865       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0920 18:43:38.072937       1 shared_informer.go:262] Caches are synced for service account
	I0920 18:43:38.074550       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0920 18:43:38.084019       1 shared_informer.go:262] Caches are synced for daemon sets
	I0920 18:43:38.087840       1 shared_informer.go:262] Caches are synced for job
	I0920 18:43:38.095811       1 shared_informer.go:262] Caches are synced for disruption
	I0920 18:43:38.095926       1 disruption.go:371] Sending events to api server.
	I0920 18:43:38.162424       1 shared_informer.go:262] Caches are synced for persistent volume
	I0920 18:43:38.164804       1 shared_informer.go:262] Caches are synced for PV protection
	I0920 18:43:38.206188       1 shared_informer.go:262] Caches are synced for resource quota
	I0920 18:43:38.223834       1 shared_informer.go:262] Caches are synced for expand
	I0920 18:43:38.238302       1 shared_informer.go:262] Caches are synced for HPA
	I0920 18:43:38.244638       1 shared_informer.go:262] Caches are synced for attach detach
	I0920 18:43:38.279965       1 shared_informer.go:262] Caches are synced for resource quota
	I0920 18:43:38.725817       1 shared_informer.go:262] Caches are synced for garbage collector
	I0920 18:43:38.781682       1 shared_informer.go:262] Caches are synced for garbage collector
	I0920 18:43:38.781789       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [a16c8e9f94e9ddabb2fed9e570a529133d9b4097055dacf8c96871f72adcaa0c] <==
	I0920 18:43:27.508455       1 node.go:163] Successfully retrieved node IP: 192.168.39.205
	I0920 18:43:27.508543       1 server_others.go:138] "Detected node IP" address="192.168.39.205"
	I0920 18:43:27.508588       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0920 18:43:27.548063       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0920 18:43:27.548084       1 server_others.go:206] "Using iptables Proxier"
	I0920 18:43:27.549294       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0920 18:43:27.550220       1 server.go:661] "Version info" version="v1.24.4"
	I0920 18:43:27.550237       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:43:27.551600       1 config.go:317] "Starting service config controller"
	I0920 18:43:27.552197       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0920 18:43:27.552232       1 config.go:226] "Starting endpoint slice config controller"
	I0920 18:43:27.552237       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0920 18:43:27.553833       1 config.go:444] "Starting node config controller"
	I0920 18:43:27.553842       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0920 18:43:27.652459       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0920 18:43:27.652508       1 shared_informer.go:262] Caches are synced for service config
	I0920 18:43:27.654849       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [5730a01c5c820fd1aa15844046601298b0542294c7115922e6801e0d803ad101] <==
	I0920 18:43:22.117549       1 serving.go:348] Generated self-signed cert in-memory
	W0920 18:43:25.655558       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 18:43:25.657179       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 18:43:25.657284       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:43:25.657310       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:43:25.708924       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0920 18:43:25.708954       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:43:25.713605       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 18:43:25.713859       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:43:25.714179       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0920 18:43:25.714202       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0920 18:43:25.815379       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:43:26 test-preload-968336 kubelet[1133]: I0920 18:43:26.432005    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/267c68e5-d6c2-4bf7-915e-e8970b7c9323-config-volume\") pod \"coredns-6d4b75cb6d-xrndt\" (UID: \"267c68e5-d6c2-4bf7-915e-e8970b7c9323\") " pod="kube-system/coredns-6d4b75cb6d-xrndt"
	Sep 20 18:43:26 test-preload-968336 kubelet[1133]: I0920 18:43:26.432093    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8b601af3-dc4b-4b11-a089-2682b802698c-kube-proxy\") pod \"kube-proxy-4wflr\" (UID: \"8b601af3-dc4b-4b11-a089-2682b802698c\") " pod="kube-system/kube-proxy-4wflr"
	Sep 20 18:43:26 test-preload-968336 kubelet[1133]: I0920 18:43:26.432114    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b601af3-dc4b-4b11-a089-2682b802698c-xtables-lock\") pod \"kube-proxy-4wflr\" (UID: \"8b601af3-dc4b-4b11-a089-2682b802698c\") " pod="kube-system/kube-proxy-4wflr"
	Sep 20 18:43:26 test-preload-968336 kubelet[1133]: I0920 18:43:26.432171    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cc88aebd-df1d-443f-ac87-2008f728ba9e-tmp\") pod \"storage-provisioner\" (UID: \"cc88aebd-df1d-443f-ac87-2008f728ba9e\") " pod="kube-system/storage-provisioner"
	Sep 20 18:43:26 test-preload-968336 kubelet[1133]: I0920 18:43:26.432205    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg2zm\" (UniqueName: \"kubernetes.io/projected/cc88aebd-df1d-443f-ac87-2008f728ba9e-kube-api-access-qg2zm\") pod \"storage-provisioner\" (UID: \"cc88aebd-df1d-443f-ac87-2008f728ba9e\") " pod="kube-system/storage-provisioner"
	Sep 20 18:43:26 test-preload-968336 kubelet[1133]: I0920 18:43:26.432261    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b601af3-dc4b-4b11-a089-2682b802698c-lib-modules\") pod \"kube-proxy-4wflr\" (UID: \"8b601af3-dc4b-4b11-a089-2682b802698c\") " pod="kube-system/kube-proxy-4wflr"
	Sep 20 18:43:26 test-preload-968336 kubelet[1133]: I0920 18:43:26.432284    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g8v9\" (UniqueName: \"kubernetes.io/projected/8b601af3-dc4b-4b11-a089-2682b802698c-kube-api-access-7g8v9\") pod \"kube-proxy-4wflr\" (UID: \"8b601af3-dc4b-4b11-a089-2682b802698c\") " pod="kube-system/kube-proxy-4wflr"
	Sep 20 18:43:26 test-preload-968336 kubelet[1133]: I0920 18:43:26.432302    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llpt7\" (UniqueName: \"kubernetes.io/projected/267c68e5-d6c2-4bf7-915e-e8970b7c9323-kube-api-access-llpt7\") pod \"coredns-6d4b75cb6d-xrndt\" (UID: \"267c68e5-d6c2-4bf7-915e-e8970b7c9323\") " pod="kube-system/coredns-6d4b75cb6d-xrndt"
	Sep 20 18:43:26 test-preload-968336 kubelet[1133]: I0920 18:43:26.432362    1133 reconciler.go:159] "Reconciler: start to sync state"
	Sep 20 18:43:26 test-preload-968336 kubelet[1133]: I0920 18:43:26.490726    1133 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=fe84efda-dfda-444e-83b6-8025a1019c4c path="/var/lib/kubelet/pods/fe84efda-dfda-444e-83b6-8025a1019c4c/volumes"
	Sep 20 18:43:26 test-preload-968336 kubelet[1133]: E0920 18:43:26.537698    1133 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 20 18:43:26 test-preload-968336 kubelet[1133]: E0920 18:43:26.538278    1133 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/267c68e5-d6c2-4bf7-915e-e8970b7c9323-config-volume podName:267c68e5-d6c2-4bf7-915e-e8970b7c9323 nodeName:}" failed. No retries permitted until 2024-09-20 18:43:27.038219429 +0000 UTC m=+6.791449661 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/267c68e5-d6c2-4bf7-915e-e8970b7c9323-config-volume") pod "coredns-6d4b75cb6d-xrndt" (UID: "267c68e5-d6c2-4bf7-915e-e8970b7c9323") : object "kube-system"/"coredns" not registered
	Sep 20 18:43:27 test-preload-968336 kubelet[1133]: E0920 18:43:27.041643    1133 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 20 18:43:27 test-preload-968336 kubelet[1133]: E0920 18:43:27.041716    1133 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/267c68e5-d6c2-4bf7-915e-e8970b7c9323-config-volume podName:267c68e5-d6c2-4bf7-915e-e8970b7c9323 nodeName:}" failed. No retries permitted until 2024-09-20 18:43:28.041701006 +0000 UTC m=+7.794931236 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/267c68e5-d6c2-4bf7-915e-e8970b7c9323-config-volume") pod "coredns-6d4b75cb6d-xrndt" (UID: "267c68e5-d6c2-4bf7-915e-e8970b7c9323") : object "kube-system"/"coredns" not registered
	Sep 20 18:43:27 test-preload-968336 kubelet[1133]: I0920 18:43:27.529019    1133 scope.go:110] "RemoveContainer" containerID="4efa013485e14fd1e5e24e775da801061e800610275e59ee51f1dd76f58f026f"
	Sep 20 18:43:28 test-preload-968336 kubelet[1133]: E0920 18:43:28.049171    1133 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 20 18:43:28 test-preload-968336 kubelet[1133]: E0920 18:43:28.049468    1133 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/267c68e5-d6c2-4bf7-915e-e8970b7c9323-config-volume podName:267c68e5-d6c2-4bf7-915e-e8970b7c9323 nodeName:}" failed. No retries permitted until 2024-09-20 18:43:30.049443797 +0000 UTC m=+9.802674046 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/267c68e5-d6c2-4bf7-915e-e8970b7c9323-config-volume") pod "coredns-6d4b75cb6d-xrndt" (UID: "267c68e5-d6c2-4bf7-915e-e8970b7c9323") : object "kube-system"/"coredns" not registered
	Sep 20 18:43:28 test-preload-968336 kubelet[1133]: E0920 18:43:28.484607    1133 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-xrndt" podUID=267c68e5-d6c2-4bf7-915e-e8970b7c9323
	Sep 20 18:43:28 test-preload-968336 kubelet[1133]: I0920 18:43:28.542313    1133 scope.go:110] "RemoveContainer" containerID="00af446faade18f8ee76f71c9bf56585cbb7fea668a849491ff363f5ddb3d189"
	Sep 20 18:43:28 test-preload-968336 kubelet[1133]: E0920 18:43:28.542522    1133 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cc88aebd-df1d-443f-ac87-2008f728ba9e)\"" pod="kube-system/storage-provisioner" podUID=cc88aebd-df1d-443f-ac87-2008f728ba9e
	Sep 20 18:43:28 test-preload-968336 kubelet[1133]: I0920 18:43:28.542583    1133 scope.go:110] "RemoveContainer" containerID="4efa013485e14fd1e5e24e775da801061e800610275e59ee51f1dd76f58f026f"
	Sep 20 18:43:29 test-preload-968336 kubelet[1133]: I0920 18:43:29.547700    1133 scope.go:110] "RemoveContainer" containerID="00af446faade18f8ee76f71c9bf56585cbb7fea668a849491ff363f5ddb3d189"
	Sep 20 18:43:29 test-preload-968336 kubelet[1133]: E0920 18:43:29.548670    1133 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cc88aebd-df1d-443f-ac87-2008f728ba9e)\"" pod="kube-system/storage-provisioner" podUID=cc88aebd-df1d-443f-ac87-2008f728ba9e
	Sep 20 18:43:30 test-preload-968336 kubelet[1133]: E0920 18:43:30.069524    1133 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 20 18:43:30 test-preload-968336 kubelet[1133]: E0920 18:43:30.069621    1133 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/267c68e5-d6c2-4bf7-915e-e8970b7c9323-config-volume podName:267c68e5-d6c2-4bf7-915e-e8970b7c9323 nodeName:}" failed. No retries permitted until 2024-09-20 18:43:34.069605716 +0000 UTC m=+13.822835945 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/267c68e5-d6c2-4bf7-915e-e8970b7c9323-config-volume") pod "coredns-6d4b75cb6d-xrndt" (UID: "267c68e5-d6c2-4bf7-915e-e8970b7c9323") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [00af446faade18f8ee76f71c9bf56585cbb7fea668a849491ff363f5ddb3d189] <==
	I0920 18:43:27.634210       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0920 18:43:27.635638       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-968336 -n test-preload-968336
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-968336 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-968336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-968336
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-968336: (1.166557781s)
--- FAIL: TestPreload (172.53s)

                                                
                                    
x
+
TestKubernetesUpgrade (404.36s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-149276 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-149276 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m52.515281449s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-149276] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-149276" primary control-plane node in "kubernetes-upgrade-149276" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:45:39.405844  280221 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:45:39.405981  280221 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:45:39.406012  280221 out.go:358] Setting ErrFile to fd 2...
	I0920 18:45:39.406026  280221 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:45:39.406260  280221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 18:45:39.406812  280221 out.go:352] Setting JSON to false
	I0920 18:45:39.407662  280221 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8882,"bootTime":1726849057,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:45:39.407757  280221 start.go:139] virtualization: kvm guest
	I0920 18:45:39.409992  280221 out.go:177] * [kubernetes-upgrade-149276] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:45:39.411518  280221 notify.go:220] Checking for updates...
	I0920 18:45:39.411587  280221 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 18:45:39.413960  280221 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:45:39.417980  280221 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 18:45:39.420302  280221 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:45:39.424507  280221 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:45:39.427061  280221 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:45:39.428693  280221 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:45:39.472833  280221 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 18:45:39.474071  280221 start.go:297] selected driver: kvm2
	I0920 18:45:39.474090  280221 start.go:901] validating driver "kvm2" against <nil>
	I0920 18:45:39.474110  280221 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:45:39.474866  280221 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:45:39.491941  280221 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:45:39.515451  280221 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:45:39.515529  280221 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:45:39.515840  280221 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 18:45:39.515869  280221 cni.go:84] Creating CNI manager for ""
	I0920 18:45:39.515934  280221 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:45:39.515942  280221 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 18:45:39.516030  280221 start.go:340] cluster config:
	{Name:kubernetes-upgrade-149276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-149276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:45:39.516168  280221 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:45:39.518011  280221 out.go:177] * Starting "kubernetes-upgrade-149276" primary control-plane node in "kubernetes-upgrade-149276" cluster
	I0920 18:45:39.519137  280221 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:45:39.519184  280221 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 18:45:39.519201  280221 cache.go:56] Caching tarball of preloaded images
	I0920 18:45:39.519283  280221 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:45:39.519295  280221 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 18:45:39.519719  280221 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/config.json ...
	I0920 18:45:39.519750  280221 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/config.json: {Name:mk43f9c7c0fe1eee61d6103fd0cfbd0cae357fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:45:39.519913  280221 start.go:360] acquireMachinesLock for kubernetes-upgrade-149276: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:46:00.554907  280221 start.go:364] duration metric: took 21.034949915s to acquireMachinesLock for "kubernetes-upgrade-149276"
	I0920 18:46:00.555003  280221 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-149276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-149276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:46:00.555112  280221 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 18:46:00.557710  280221 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 18:46:00.557931  280221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:46:00.557968  280221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:46:00.574921  280221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43333
	I0920 18:46:00.575487  280221 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:46:00.576031  280221 main.go:141] libmachine: Using API Version  1
	I0920 18:46:00.576056  280221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:46:00.576443  280221 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:46:00.576653  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetMachineName
	I0920 18:46:00.576794  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .DriverName
	I0920 18:46:00.576985  280221 start.go:159] libmachine.API.Create for "kubernetes-upgrade-149276" (driver="kvm2")
	I0920 18:46:00.577027  280221 client.go:168] LocalClient.Create starting
	I0920 18:46:00.577063  280221 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem
	I0920 18:46:00.577105  280221 main.go:141] libmachine: Decoding PEM data...
	I0920 18:46:00.577126  280221 main.go:141] libmachine: Parsing certificate...
	I0920 18:46:00.577195  280221 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem
	I0920 18:46:00.577220  280221 main.go:141] libmachine: Decoding PEM data...
	I0920 18:46:00.577235  280221 main.go:141] libmachine: Parsing certificate...
	I0920 18:46:00.577258  280221 main.go:141] libmachine: Running pre-create checks...
	I0920 18:46:00.577271  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .PreCreateCheck
	I0920 18:46:00.577585  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetConfigRaw
	I0920 18:46:00.578087  280221 main.go:141] libmachine: Creating machine...
	I0920 18:46:00.578103  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .Create
	I0920 18:46:00.578257  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Creating KVM machine...
	I0920 18:46:00.579607  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found existing default KVM network
	I0920 18:46:00.580634  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | I0920 18:46:00.580466  280565 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:cd:7d:e8} reservation:<nil>}
	I0920 18:46:00.581338  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | I0920 18:46:00.581215  280565 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001238a0}
	I0920 18:46:00.581362  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | created network xml: 
	I0920 18:46:00.581374  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | <network>
	I0920 18:46:00.581383  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG |   <name>mk-kubernetes-upgrade-149276</name>
	I0920 18:46:00.581397  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG |   <dns enable='no'/>
	I0920 18:46:00.581446  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG |   
	I0920 18:46:00.581473  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0920 18:46:00.581494  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG |     <dhcp>
	I0920 18:46:00.581509  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0920 18:46:00.581522  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG |     </dhcp>
	I0920 18:46:00.581534  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG |   </ip>
	I0920 18:46:00.581544  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG |   
	I0920 18:46:00.581552  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | </network>
	I0920 18:46:00.581561  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | 
	I0920 18:46:00.588111  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | trying to create private KVM network mk-kubernetes-upgrade-149276 192.168.50.0/24...
	I0920 18:46:00.664344  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Setting up store path in /home/jenkins/minikube-integration/19679-237658/.minikube/machines/kubernetes-upgrade-149276 ...
	I0920 18:46:00.664379  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | private KVM network mk-kubernetes-upgrade-149276 192.168.50.0/24 created
	I0920 18:46:00.664389  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Building disk image from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 18:46:00.664408  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Downloading /home/jenkins/minikube-integration/19679-237658/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 18:46:00.664502  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | I0920 18:46:00.664263  280565 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:46:00.916048  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | I0920 18:46:00.915887  280565 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/kubernetes-upgrade-149276/id_rsa...
	I0920 18:46:01.238471  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | I0920 18:46:01.238294  280565 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/kubernetes-upgrade-149276/kubernetes-upgrade-149276.rawdisk...
	I0920 18:46:01.238509  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | Writing magic tar header
	I0920 18:46:01.238530  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | Writing SSH key tar header
	I0920 18:46:01.238544  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | I0920 18:46:01.238473  280565 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/kubernetes-upgrade-149276 ...
	I0920 18:46:01.238661  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/kubernetes-upgrade-149276
	I0920 18:46:01.238709  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines
	I0920 18:46:01.238745  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/kubernetes-upgrade-149276 (perms=drwx------)
	I0920 18:46:01.238761  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:46:01.238781  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658
	I0920 18:46:01.238794  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:46:01.238814  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:46:01.238833  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube (perms=drwxr-xr-x)
	I0920 18:46:01.238859  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658 (perms=drwxrwxr-x)
	I0920 18:46:01.238873  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:46:01.238885  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:46:01.238896  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:46:01.238956  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | Checking permissions on dir: /home
	I0920 18:46:01.239000  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Creating domain...
	I0920 18:46:01.239013  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | Skipping /home - not owner
	I0920 18:46:01.240089  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) define libvirt domain using xml: 
	I0920 18:46:01.240112  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) <domain type='kvm'>
	I0920 18:46:01.240122  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)   <name>kubernetes-upgrade-149276</name>
	I0920 18:46:01.240130  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)   <memory unit='MiB'>2200</memory>
	I0920 18:46:01.240138  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)   <vcpu>2</vcpu>
	I0920 18:46:01.240154  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)   <features>
	I0920 18:46:01.240166  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     <acpi/>
	I0920 18:46:01.240176  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     <apic/>
	I0920 18:46:01.240190  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     <pae/>
	I0920 18:46:01.240199  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     
	I0920 18:46:01.240207  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)   </features>
	I0920 18:46:01.240217  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)   <cpu mode='host-passthrough'>
	I0920 18:46:01.240224  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)   
	I0920 18:46:01.240233  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)   </cpu>
	I0920 18:46:01.240240  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)   <os>
	I0920 18:46:01.240247  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     <type>hvm</type>
	I0920 18:46:01.240266  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     <boot dev='cdrom'/>
	I0920 18:46:01.240276  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     <boot dev='hd'/>
	I0920 18:46:01.240285  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     <bootmenu enable='no'/>
	I0920 18:46:01.240294  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)   </os>
	I0920 18:46:01.240302  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)   <devices>
	I0920 18:46:01.240312  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     <disk type='file' device='cdrom'>
	I0920 18:46:01.240327  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/kubernetes-upgrade-149276/boot2docker.iso'/>
	I0920 18:46:01.240340  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)       <target dev='hdc' bus='scsi'/>
	I0920 18:46:01.240351  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)       <readonly/>
	I0920 18:46:01.240366  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     </disk>
	I0920 18:46:01.240378  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     <disk type='file' device='disk'>
	I0920 18:46:01.240388  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:46:01.240404  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/kubernetes-upgrade-149276/kubernetes-upgrade-149276.rawdisk'/>
	I0920 18:46:01.240416  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)       <target dev='hda' bus='virtio'/>
	I0920 18:46:01.240440  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     </disk>
	I0920 18:46:01.240462  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     <interface type='network'>
	I0920 18:46:01.240472  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)       <source network='mk-kubernetes-upgrade-149276'/>
	I0920 18:46:01.240481  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)       <model type='virtio'/>
	I0920 18:46:01.240493  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     </interface>
	I0920 18:46:01.240504  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     <interface type='network'>
	I0920 18:46:01.240513  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)       <source network='default'/>
	I0920 18:46:01.240523  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)       <model type='virtio'/>
	I0920 18:46:01.240532  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     </interface>
	I0920 18:46:01.240542  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     <serial type='pty'>
	I0920 18:46:01.240555  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)       <target port='0'/>
	I0920 18:46:01.240562  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     </serial>
	I0920 18:46:01.240573  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     <console type='pty'>
	I0920 18:46:01.240589  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)       <target type='serial' port='0'/>
	I0920 18:46:01.240601  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     </console>
	I0920 18:46:01.240612  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     <rng model='virtio'>
	I0920 18:46:01.240625  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)       <backend model='random'>/dev/random</backend>
	I0920 18:46:01.240634  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     </rng>
	I0920 18:46:01.240642  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     
	I0920 18:46:01.240651  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)     
	I0920 18:46:01.240659  280221 main.go:141] libmachine: (kubernetes-upgrade-149276)   </devices>
	I0920 18:46:01.240668  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) </domain>
	I0920 18:46:01.240682  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) 
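
The XML block above is the complete libvirt domain definition the kvm2 driver hands to libvirt before booting the VM (memory, vCPUs, the boot2docker.iso cdrom, the raw disk, and the two virtio NICs on networks mk-kubernetes-upgrade-149276 and default). As a rough, hedged illustration of that define-then-create step (a minimal sketch assuming the github.com/libvirt/libvirt-go bindings and a local domain.xml file; this is not minikube's actual code):

    package main

    import (
        "log"
        "os"

        libvirt "github.com/libvirt/libvirt-go" // assumed binding for this sketch
    )

    func main() {
        // Connect to the system libvirt daemon (the profile config shows KVMQemuURI:qemu:///system).
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatalf("connect: %v", err)
        }
        defer conn.Close()

        // domain.xml would hold the <domain type='kvm'> document printed above.
        xml, err := os.ReadFile("domain.xml")
        if err != nil {
            log.Fatalf("read xml: %v", err)
        }

        // Define the persistent domain from XML, then start it ("Creating domain..." in the log).
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            log.Fatalf("define: %v", err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatalf("create: %v", err)
        }
        log.Println("domain started; next step is waiting for a DHCP lease")
    }
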
	I0920 18:46:01.248226  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:23:30:9c in network default
	I0920 18:46:01.249076  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Ensuring networks are active...
	I0920 18:46:01.249113  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:01.249984  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Ensuring network default is active
	I0920 18:46:01.250357  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Ensuring network mk-kubernetes-upgrade-149276 is active
	I0920 18:46:01.250991  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Getting domain xml...
	I0920 18:46:01.252127  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Creating domain...
	I0920 18:46:02.676608  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Waiting to get IP...
	I0920 18:46:02.677734  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:02.678323  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | unable to find current IP address of domain kubernetes-upgrade-149276 in network mk-kubernetes-upgrade-149276
	I0920 18:46:02.678356  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | I0920 18:46:02.678298  280565 retry.go:31] will retry after 268.844816ms: waiting for machine to come up
	I0920 18:46:02.949151  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:02.950148  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | unable to find current IP address of domain kubernetes-upgrade-149276 in network mk-kubernetes-upgrade-149276
	I0920 18:46:02.950174  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | I0920 18:46:02.950086  280565 retry.go:31] will retry after 253.356617ms: waiting for machine to come up
	I0920 18:46:03.205813  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:03.206380  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | unable to find current IP address of domain kubernetes-upgrade-149276 in network mk-kubernetes-upgrade-149276
	I0920 18:46:03.206408  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | I0920 18:46:03.206331  280565 retry.go:31] will retry after 416.910998ms: waiting for machine to come up
	I0920 18:46:03.625284  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:03.625747  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | unable to find current IP address of domain kubernetes-upgrade-149276 in network mk-kubernetes-upgrade-149276
	I0920 18:46:03.625779  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | I0920 18:46:03.625696  280565 retry.go:31] will retry after 518.974797ms: waiting for machine to come up
	I0920 18:46:04.146648  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:04.147185  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | unable to find current IP address of domain kubernetes-upgrade-149276 in network mk-kubernetes-upgrade-149276
	I0920 18:46:04.147215  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | I0920 18:46:04.147141  280565 retry.go:31] will retry after 570.26028ms: waiting for machine to come up
	I0920 18:46:04.719048  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:04.719658  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | unable to find current IP address of domain kubernetes-upgrade-149276 in network mk-kubernetes-upgrade-149276
	I0920 18:46:04.719695  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | I0920 18:46:04.719602  280565 retry.go:31] will retry after 627.615939ms: waiting for machine to come up
	I0920 18:46:05.349165  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:05.349956  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | unable to find current IP address of domain kubernetes-upgrade-149276 in network mk-kubernetes-upgrade-149276
	I0920 18:46:05.349981  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | I0920 18:46:05.349825  280565 retry.go:31] will retry after 1.10117154s: waiting for machine to come up
	I0920 18:46:06.452824  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:06.453398  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | unable to find current IP address of domain kubernetes-upgrade-149276 in network mk-kubernetes-upgrade-149276
	I0920 18:46:06.453491  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | I0920 18:46:06.453369  280565 retry.go:31] will retry after 1.478808674s: waiting for machine to come up
	I0920 18:46:07.934435  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:07.934867  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | unable to find current IP address of domain kubernetes-upgrade-149276 in network mk-kubernetes-upgrade-149276
	I0920 18:46:07.934889  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | I0920 18:46:07.934833  280565 retry.go:31] will retry after 1.125093333s: waiting for machine to come up
	I0920 18:46:09.062574  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:09.063101  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | unable to find current IP address of domain kubernetes-upgrade-149276 in network mk-kubernetes-upgrade-149276
	I0920 18:46:09.063128  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | I0920 18:46:09.063035  280565 retry.go:31] will retry after 2.317184447s: waiting for machine to come up
	I0920 18:46:11.382648  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:11.383087  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | unable to find current IP address of domain kubernetes-upgrade-149276 in network mk-kubernetes-upgrade-149276
	I0920 18:46:11.383119  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | I0920 18:46:11.383024  280565 retry.go:31] will retry after 2.420350923s: waiting for machine to come up
	I0920 18:46:13.806634  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:13.807208  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | unable to find current IP address of domain kubernetes-upgrade-149276 in network mk-kubernetes-upgrade-149276
	I0920 18:46:13.807234  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | I0920 18:46:13.807144  280565 retry.go:31] will retry after 3.612074596s: waiting for machine to come up
	I0920 18:46:17.421177  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:17.421683  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | unable to find current IP address of domain kubernetes-upgrade-149276 in network mk-kubernetes-upgrade-149276
	I0920 18:46:17.421718  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | I0920 18:46:17.421633  280565 retry.go:31] will retry after 4.077478318s: waiting for machine to come up
	I0920 18:46:21.500210  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:21.500680  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Found IP for machine: 192.168.50.65
	I0920 18:46:21.500719  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has current primary IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
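
The "will retry after …: waiting for machine to come up" lines above come from a retry helper (retry.go:31) that polls the new domain for a DHCP lease with a growing, jittered delay until an address appears. A hedged sketch of that polling pattern follows; the names waitForIP and lookup are hypothetical stand-ins, not minikube's own helpers:

    package main

    import (
        "fmt"
        "log"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it returns an address or the deadline passes,
    // sleeping a growing, jittered interval between attempts -- the same shape as
    // the "retry.go:31] will retry after ..." lines in the log above.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
            log.Printf("will retry after %v: waiting for machine to come up", sleep)
            time.Sleep(sleep)
            if delay < 4*time.Second { // grow the base delay, but keep polling regularly
                delay *= 2
            }
        }
        return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
    }

    func main() {
        // lookup here is a stand-in for asking libvirt for the domain's DHCP lease.
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 5 {
                return "", fmt.Errorf("no lease yet")
            }
            return "192.168.50.65", nil
        }, time.Minute)
        fmt.Println(ip, err)
    }
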
	I0920 18:46:21.500728  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Reserving static IP address...
	I0920 18:46:21.501217  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-149276", mac: "52:54:00:e7:14:32", ip: "192.168.50.65"} in network mk-kubernetes-upgrade-149276
	I0920 18:46:21.582478  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | Getting to WaitForSSH function...
	I0920 18:46:21.582503  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Reserved static IP address: 192.168.50.65
	I0920 18:46:21.582526  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Waiting for SSH to be available...
	I0920 18:46:21.585424  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:21.585820  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276
	I0920 18:46:21.585851  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-149276 interface with MAC address 52:54:00:e7:14:32
	I0920 18:46:21.586047  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | Using SSH client type: external
	I0920 18:46:21.586109  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/kubernetes-upgrade-149276/id_rsa (-rw-------)
	I0920 18:46:21.586184  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/kubernetes-upgrade-149276/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:46:21.586213  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | About to run SSH command:
	I0920 18:46:21.586234  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | exit 0
	I0920 18:46:21.590626  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | SSH cmd err, output: exit status 255: 
	I0920 18:46:21.590656  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0920 18:46:21.590667  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | command : exit 0
	I0920 18:46:21.590675  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | err     : exit status 255
	I0920 18:46:21.590684  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | output  : 
	I0920 18:46:24.590906  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | Getting to WaitForSSH function...
	I0920 18:46:24.593465  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:24.593838  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:46:15 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:46:24.593871  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:24.594027  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | Using SSH client type: external
	I0920 18:46:24.594054  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/kubernetes-upgrade-149276/id_rsa (-rw-------)
	I0920 18:46:24.594088  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/kubernetes-upgrade-149276/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:46:24.594101  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | About to run SSH command:
	I0920 18:46:24.594115  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | exit 0
	I0920 18:46:24.722362  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | SSH cmd err, output: <nil>: 
	I0920 18:46:24.722649  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) KVM machine creation complete!
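
The WaitForSSH attempts above simply shell out to the external ssh client with the options shown and run "exit 0"; any non-zero status (the first attempt returned 255 because sshd was not up yet) means "try again". A minimal sketch of that probe, using the host and key path from the log, a subset of the logged ssh options, and os/exec (everything else is assumed, not minikube's implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady runs `exit 0` over ssh with options like those in the log;
    // a zero exit status means sshd is up and accepting our key.
    func sshReady(ip, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "docker@"+ip,
            "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        ip := "192.168.50.65"
        key := "/home/jenkins/minikube-integration/19679-237658/.minikube/machines/kubernetes-upgrade-149276/id_rsa"
        for !sshReady(ip, key) {
            fmt.Println("SSH not ready yet, retrying in 3s") // the log waits ~3s between attempts too
            time.Sleep(3 * time.Second)
        }
        fmt.Println("SSH available")
    }
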
	I0920 18:46:24.723177  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetConfigRaw
	I0920 18:46:24.723858  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .DriverName
	I0920 18:46:24.724084  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .DriverName
	I0920 18:46:24.724352  280221 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:46:24.724372  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetState
	I0920 18:46:24.726153  280221 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:46:24.726172  280221 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:46:24.726179  280221 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:46:24.726187  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHHostname
	I0920 18:46:24.728921  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:24.729354  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:46:15 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:46:24.729385  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:24.729653  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHPort
	I0920 18:46:24.729862  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:46:24.730052  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:46:24.730205  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHUsername
	I0920 18:46:24.730434  280221 main.go:141] libmachine: Using SSH client type: native
	I0920 18:46:24.730698  280221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0920 18:46:24.730717  280221 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:46:24.841538  280221 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:46:24.841569  280221 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:46:24.841581  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHHostname
	I0920 18:46:24.844454  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:24.844838  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:46:15 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:46:24.844869  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:24.845081  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHPort
	I0920 18:46:24.845310  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:46:24.845527  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:46:24.845670  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHUsername
	I0920 18:46:24.845829  280221 main.go:141] libmachine: Using SSH client type: native
	I0920 18:46:24.846072  280221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0920 18:46:24.846087  280221 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:46:24.954705  280221 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:46:24.954797  280221 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:46:24.954808  280221 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:46:24.954816  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetMachineName
	I0920 18:46:24.955087  280221 buildroot.go:166] provisioning hostname "kubernetes-upgrade-149276"
	I0920 18:46:24.955112  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetMachineName
	I0920 18:46:24.955305  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHHostname
	I0920 18:46:24.958061  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:24.958429  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:46:15 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:46:24.958461  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:24.958583  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHPort
	I0920 18:46:24.958796  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:46:24.958955  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:46:24.959100  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHUsername
	I0920 18:46:24.959292  280221 main.go:141] libmachine: Using SSH client type: native
	I0920 18:46:24.959525  280221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0920 18:46:24.959545  280221 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-149276 && echo "kubernetes-upgrade-149276" | sudo tee /etc/hostname
	I0920 18:46:25.082556  280221 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-149276
	
	I0920 18:46:25.082587  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHHostname
	I0920 18:46:25.085798  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.086208  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:46:15 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:46:25.086240  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.086441  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHPort
	I0920 18:46:25.086652  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:46:25.086826  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:46:25.087012  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHUsername
	I0920 18:46:25.087186  280221 main.go:141] libmachine: Using SSH client type: native
	I0920 18:46:25.087815  280221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0920 18:46:25.087859  280221 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-149276' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-149276/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-149276' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:46:25.206730  280221 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:46:25.206765  280221 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 18:46:25.206791  280221 buildroot.go:174] setting up certificates
	I0920 18:46:25.206806  280221 provision.go:84] configureAuth start
	I0920 18:46:25.206818  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetMachineName
	I0920 18:46:25.207115  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetIP
	I0920 18:46:25.209711  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.210188  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:46:15 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:46:25.210217  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.210374  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHHostname
	I0920 18:46:25.212963  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.213449  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:46:15 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:46:25.213479  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.213593  280221 provision.go:143] copyHostCerts
	I0920 18:46:25.213697  280221 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 18:46:25.213719  280221 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 18:46:25.213783  280221 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 18:46:25.213893  280221 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 18:46:25.213922  280221 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 18:46:25.213954  280221 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 18:46:25.214038  280221 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 18:46:25.214050  280221 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 18:46:25.214075  280221 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 18:46:25.214136  280221 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-149276 san=[127.0.0.1 192.168.50.65 kubernetes-upgrade-149276 localhost minikube]
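
The provision.go line above generates a server certificate signed by the local CA with the listed SANs (127.0.0.1, 192.168.50.65, kubernetes-upgrade-149276, localhost, minikube). As an illustration only, here is a compact Go sketch of issuing such a cert with crypto/x509; it assumes the CA key is a PKCS#1-encoded RSA key and uses a made-up three-year lifetime, so treat it as a sketch of the technique rather than minikube's code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Load the CA certificate and private key (paths relative to the .minikube dir in the log).
        caPEM, err := os.ReadFile("certs/ca.pem")
        if err != nil {
            log.Fatal(err)
        }
        caKeyPEM, err := os.ReadFile("certs/ca-key.pem")
        if err != nil {
            log.Fatal(err)
        }
        caBlock, _ := pem.Decode(caPEM)
        keyBlock, _ := pem.Decode(caKeyPEM)
        if caBlock == nil || keyBlock == nil {
            log.Fatal("could not decode CA PEM data")
        }
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA/PKCS#1 CA key
        if err != nil {
            log.Fatal(err)
        }

        // Server key plus a template carrying the SANs from the log line above.
        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-149276"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative lifetime
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"kubernetes-upgrade-149276", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.65")},
        }

        // Sign the server certificate with the CA and print it in PEM form.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
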
	I0920 18:46:25.292676  280221 provision.go:177] copyRemoteCerts
	I0920 18:46:25.292741  280221 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:46:25.292769  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHHostname
	I0920 18:46:25.295710  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.296068  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:46:15 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:46:25.296103  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.296334  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHPort
	I0920 18:46:25.296544  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:46:25.296733  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHUsername
	I0920 18:46:25.296861  280221 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/kubernetes-upgrade-149276/id_rsa Username:docker}
	I0920 18:46:25.388020  280221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:46:25.413754  280221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0920 18:46:25.438466  280221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:46:25.464271  280221 provision.go:87] duration metric: took 257.452026ms to configureAuth
	I0920 18:46:25.464303  280221 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:46:25.464491  280221 config.go:182] Loaded profile config "kubernetes-upgrade-149276": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 18:46:25.464579  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHHostname
	I0920 18:46:25.468119  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.468545  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:46:15 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:46:25.468580  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.468840  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHPort
	I0920 18:46:25.469062  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:46:25.469260  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:46:25.469403  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHUsername
	I0920 18:46:25.469567  280221 main.go:141] libmachine: Using SSH client type: native
	I0920 18:46:25.469765  280221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0920 18:46:25.469785  280221 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:46:25.708256  280221 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:46:25.708297  280221 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:46:25.708305  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetURL
	I0920 18:46:25.709582  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | Using libvirt version 6000000
	I0920 18:46:25.712194  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.712595  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:46:15 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:46:25.712624  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.712748  280221 main.go:141] libmachine: Docker is up and running!
	I0920 18:46:25.712761  280221 main.go:141] libmachine: Reticulating splines...
	I0920 18:46:25.712770  280221 client.go:171] duration metric: took 25.135734127s to LocalClient.Create
	I0920 18:46:25.712796  280221 start.go:167] duration metric: took 25.13581272s to libmachine.API.Create "kubernetes-upgrade-149276"
	I0920 18:46:25.712808  280221 start.go:293] postStartSetup for "kubernetes-upgrade-149276" (driver="kvm2")
	I0920 18:46:25.712822  280221 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:46:25.712847  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .DriverName
	I0920 18:46:25.713086  280221 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:46:25.713115  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHHostname
	I0920 18:46:25.715306  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.715617  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:46:15 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:46:25.715641  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.715831  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHPort
	I0920 18:46:25.716076  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:46:25.716232  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHUsername
	I0920 18:46:25.716352  280221 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/kubernetes-upgrade-149276/id_rsa Username:docker}
	I0920 18:46:25.801894  280221 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:46:25.806204  280221 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:46:25.806230  280221 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 18:46:25.806338  280221 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 18:46:25.806514  280221 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 18:46:25.806678  280221 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:46:25.817798  280221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:46:25.847661  280221 start.go:296] duration metric: took 134.83526ms for postStartSetup
	I0920 18:46:25.847724  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetConfigRaw
	I0920 18:46:25.848433  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetIP
	I0920 18:46:25.851549  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.851948  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:46:15 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:46:25.851979  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.852153  280221 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/config.json ...
	I0920 18:46:25.852378  280221 start.go:128] duration metric: took 25.29725398s to createHost
	I0920 18:46:25.852402  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHHostname
	I0920 18:46:25.854845  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.855240  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:46:15 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:46:25.855265  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.855436  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHPort
	I0920 18:46:25.855619  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:46:25.855777  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:46:25.855952  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHUsername
	I0920 18:46:25.856172  280221 main.go:141] libmachine: Using SSH client type: native
	I0920 18:46:25.856327  280221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0920 18:46:25.856336  280221 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:46:25.967227  280221 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726857985.946613610
	
	I0920 18:46:25.967253  280221 fix.go:216] guest clock: 1726857985.946613610
	I0920 18:46:25.967263  280221 fix.go:229] Guest: 2024-09-20 18:46:25.94661361 +0000 UTC Remote: 2024-09-20 18:46:25.852390336 +0000 UTC m=+46.495888293 (delta=94.223274ms)
	I0920 18:46:25.967305  280221 fix.go:200] guest clock delta is within tolerance: 94.223274ms
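
The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it with the host clock, and accept the machine if the delta is small enough (94ms here). A tiny hedged sketch of that comparison; the 2-second tolerance is an assumption for illustration, not the value minikube uses:

    package main

    import (
        "fmt"
        "math"
        "time"
    )

    // clockDeltaOK turns the guest's `date +%s.%N` output into a time.Time,
    // computes the difference from the host clock, and checks it against a tolerance.
    func clockDeltaOK(guestSeconds float64, tolerance time.Duration) (time.Duration, bool) {
        guest := time.Unix(0, int64(guestSeconds*float64(time.Second)))
        delta := time.Since(guest)
        return delta, math.Abs(float64(delta)) <= float64(tolerance)
    }

    func main() {
        delta, ok := clockDeltaOK(1726857985.946613610, 2*time.Second) // guest value from the log
        fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
    }
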
	I0920 18:46:25.967314  280221 start.go:83] releasing machines lock for "kubernetes-upgrade-149276", held for 25.412347681s
	I0920 18:46:25.967336  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .DriverName
	I0920 18:46:25.967631  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetIP
	I0920 18:46:25.970655  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.971050  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:46:15 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:46:25.971078  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.971291  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .DriverName
	I0920 18:46:25.972010  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .DriverName
	I0920 18:46:25.972268  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .DriverName
	I0920 18:46:25.972364  280221 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:46:25.972416  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHHostname
	I0920 18:46:25.972515  280221 ssh_runner.go:195] Run: cat /version.json
	I0920 18:46:25.972536  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHHostname
	I0920 18:46:25.975823  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.975855  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.976274  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:46:15 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:46:25.976298  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.976350  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:46:15 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:46:25.976370  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:25.976573  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHPort
	I0920 18:46:25.976597  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHPort
	I0920 18:46:25.976805  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:46:25.976811  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:46:25.976991  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHUsername
	I0920 18:46:25.977005  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHUsername
	I0920 18:46:25.977162  280221 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/kubernetes-upgrade-149276/id_rsa Username:docker}
	I0920 18:46:25.977223  280221 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/kubernetes-upgrade-149276/id_rsa Username:docker}
	I0920 18:46:26.104124  280221 ssh_runner.go:195] Run: systemctl --version
	I0920 18:46:26.112346  280221 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:46:26.279324  280221 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:46:26.287696  280221 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:46:26.287776  280221 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:46:26.306975  280221 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:46:26.307005  280221 start.go:495] detecting cgroup driver to use...
	I0920 18:46:26.307083  280221 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:46:26.328143  280221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:46:26.347417  280221 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:46:26.347497  280221 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:46:26.364512  280221 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:46:26.379915  280221 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:46:26.545865  280221 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:46:26.741120  280221 docker.go:233] disabling docker service ...
	I0920 18:46:26.741190  280221 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:46:26.757039  280221 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:46:26.774296  280221 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:46:26.920632  280221 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:46:27.070963  280221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:46:27.086242  280221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:46:27.108205  280221 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 18:46:27.108264  280221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:46:27.121063  280221 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:46:27.121129  280221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:46:27.133211  280221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:46:27.147836  280221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
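
The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: they point the pause image at registry.k8s.io/pause:3.2, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod". Assuming nothing else touches the drop-in, and noting that the [crio.image]/[crio.runtime] section headers are where these keys normally live in CRI-O's TOML (they are not shown in the log), the relevant part of the file would read roughly:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.2"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
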
	I0920 18:46:27.162379  280221 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:46:27.174650  280221 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:46:27.185742  280221 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:46:27.185810  280221 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:46:27.203014  280221 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:46:27.217536  280221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:46:27.378543  280221 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:46:27.481511  280221 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:46:27.481593  280221 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:46:27.487961  280221 start.go:563] Will wait 60s for crictl version
	I0920 18:46:27.488029  280221 ssh_runner.go:195] Run: which crictl
	I0920 18:46:27.492456  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:46:27.545651  280221 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:46:27.545731  280221 ssh_runner.go:195] Run: crio --version
	I0920 18:46:27.578908  280221 ssh_runner.go:195] Run: crio --version
	I0920 18:46:27.615612  280221 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 18:46:27.616811  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetIP
	I0920 18:46:27.620137  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:27.620522  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:46:15 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:46:27.620554  280221 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:46:27.620810  280221 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 18:46:27.625843  280221 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:46:27.640945  280221 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-149276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-149276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:46:27.641071  280221 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:46:27.641121  280221 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:46:27.679527  280221 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 18:46:27.679607  280221 ssh_runner.go:195] Run: which lz4
	I0920 18:46:27.684001  280221 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:46:27.688504  280221 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:46:27.688554  280221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 18:46:29.177197  280221 crio.go:462] duration metric: took 1.493226989s to copy over tarball
	I0920 18:46:29.177274  280221 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:46:32.080747  280221 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.903435981s)
	I0920 18:46:32.080787  280221 crio.go:469] duration metric: took 2.903564086s to extract the tarball
	I0920 18:46:32.080797  280221 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:46:32.125139  280221 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:46:32.167170  280221 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 18:46:32.167196  280221 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 18:46:32.167301  280221 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:46:32.167284  280221 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:46:32.167369  280221 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:46:32.167365  280221 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 18:46:32.167268  280221 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:46:32.167272  280221 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:46:32.167382  280221 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:46:32.167344  280221 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 18:46:32.168767  280221 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 18:46:32.168907  280221 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:46:32.168955  280221 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:46:32.168992  280221 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:46:32.169031  280221 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:46:32.169051  280221 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:46:32.169090  280221 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 18:46:32.168967  280221 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:46:32.448288  280221 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 18:46:32.472005  280221 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:46:32.473368  280221 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:46:32.507341  280221 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 18:46:32.507392  280221 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 18:46:32.507440  280221 ssh_runner.go:195] Run: which crictl
	I0920 18:46:32.511398  280221 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:46:32.520814  280221 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 18:46:32.520920  280221 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:46:32.522765  280221 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 18:46:32.545346  280221 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 18:46:32.545416  280221 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:46:32.545479  280221 ssh_runner.go:195] Run: which crictl
	I0920 18:46:32.593981  280221 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 18:46:32.594017  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:46:32.594035  280221 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:46:32.594066  280221 ssh_runner.go:195] Run: which crictl
	I0920 18:46:32.678180  280221 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 18:46:32.678215  280221 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 18:46:32.678232  280221 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 18:46:32.678246  280221 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:46:32.678271  280221 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 18:46:32.678290  280221 ssh_runner.go:195] Run: which crictl
	I0920 18:46:32.678301  280221 ssh_runner.go:195] Run: which crictl
	I0920 18:46:32.678191  280221 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 18:46:32.678301  280221 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:46:32.678334  280221 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:46:32.678352  280221 ssh_runner.go:195] Run: which crictl
	I0920 18:46:32.678366  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:46:32.678386  280221 ssh_runner.go:195] Run: which crictl
	I0920 18:46:32.678421  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:46:32.678443  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:46:32.697495  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:46:32.697544  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:46:32.774857  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:46:32.774902  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:46:32.774961  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:46:32.783491  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:46:32.783617  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:46:32.827918  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:46:32.827928  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:46:32.956968  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:46:32.957071  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:46:32.957091  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:46:32.957071  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:46:32.957208  280221 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 18:46:32.966765  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:46:32.966771  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:46:33.091124  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:46:33.091128  280221 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 18:46:33.092132  280221 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 18:46:33.092156  280221 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 18:46:33.092138  280221 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:46:33.092257  280221 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 18:46:33.143928  280221 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 18:46:33.143973  280221 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 18:46:33.402524  280221 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:46:33.541578  280221 cache_images.go:92] duration metric: took 1.374362665s to LoadCachedImages
	W0920 18:46:33.541702  280221 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0920 18:46:33.541722  280221 kubeadm.go:934] updating node { 192.168.50.65 8443 v1.20.0 crio true true} ...
	I0920 18:46:33.541876  280221 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-149276 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-149276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:46:33.541979  280221 ssh_runner.go:195] Run: crio config
	I0920 18:46:33.587438  280221 cni.go:84] Creating CNI manager for ""
	I0920 18:46:33.587466  280221 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:46:33.587476  280221 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:46:33.587497  280221 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.65 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-149276 NodeName:kubernetes-upgrade-149276 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 18:46:33.587629  280221 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-149276"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:46:33.587697  280221 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 18:46:33.597569  280221 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:46:33.597655  280221 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:46:33.607285  280221 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0920 18:46:33.624151  280221 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:46:33.641029  280221 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0920 18:46:33.658516  280221 ssh_runner.go:195] Run: grep 192.168.50.65	control-plane.minikube.internal$ /etc/hosts
	I0920 18:46:33.662611  280221 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:46:33.675345  280221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:46:33.811339  280221 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:46:33.828563  280221 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276 for IP: 192.168.50.65
	I0920 18:46:33.828605  280221 certs.go:194] generating shared ca certs ...
	I0920 18:46:33.828624  280221 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:46:33.828775  280221 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 18:46:33.828813  280221 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 18:46:33.828822  280221 certs.go:256] generating profile certs ...
	I0920 18:46:33.828875  280221 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/client.key
	I0920 18:46:33.828890  280221 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/client.crt with IP's: []
	I0920 18:46:34.227627  280221 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/client.crt ...
	I0920 18:46:34.227668  280221 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/client.crt: {Name:mk7bd082c9e530f8b8ad35957530dfcab6545621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:46:34.227882  280221 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/client.key ...
	I0920 18:46:34.227903  280221 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/client.key: {Name:mka65805ccee1aaaed06b435003c6057bd04cf8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:46:34.228016  280221 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/apiserver.key.e2a1fb56
	I0920 18:46:34.228044  280221 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/apiserver.crt.e2a1fb56 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.65]
	I0920 18:46:34.363002  280221 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/apiserver.crt.e2a1fb56 ...
	I0920 18:46:34.363030  280221 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/apiserver.crt.e2a1fb56: {Name:mk88f99d10e210e7ca21ebb4bb7b558b6b4ddb71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:46:34.363204  280221 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/apiserver.key.e2a1fb56 ...
	I0920 18:46:34.363227  280221 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/apiserver.key.e2a1fb56: {Name:mk875142149d1fdcd13aeabb734558f39aa88ae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:46:34.363324  280221 certs.go:381] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/apiserver.crt.e2a1fb56 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/apiserver.crt
	I0920 18:46:34.363421  280221 certs.go:385] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/apiserver.key.e2a1fb56 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/apiserver.key
	I0920 18:46:34.363496  280221 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/proxy-client.key
	I0920 18:46:34.363519  280221 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/proxy-client.crt with IP's: []
	I0920 18:46:34.569129  280221 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/proxy-client.crt ...
	I0920 18:46:34.569160  280221 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/proxy-client.crt: {Name:mk4953bc385e1bf25ff53c5c4c1fbb15e6038396 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:46:34.569347  280221 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/proxy-client.key ...
	I0920 18:46:34.569366  280221 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/proxy-client.key: {Name:mkb2a80854e5d548ac260be1b4d45290b551ca52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:46:34.569591  280221 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 18:46:34.569639  280221 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 18:46:34.569655  280221 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:46:34.569684  280221 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:46:34.569721  280221 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:46:34.569755  280221 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 18:46:34.569807  280221 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:46:34.570404  280221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:46:34.597406  280221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:46:34.624763  280221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:46:34.648752  280221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:46:34.683457  280221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0920 18:46:34.711169  280221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:46:34.750707  280221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:46:34.774532  280221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:46:34.800947  280221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 18:46:34.825371  280221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:46:34.849588  280221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 18:46:34.873876  280221 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:46:34.890774  280221 ssh_runner.go:195] Run: openssl version
	I0920 18:46:34.896670  280221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 18:46:34.908066  280221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 18:46:34.913158  280221 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 18:46:34.913226  280221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 18:46:34.919461  280221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:46:34.930369  280221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:46:34.941834  280221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:46:34.946945  280221 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:46:34.947012  280221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:46:34.952892  280221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:46:34.964468  280221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 18:46:34.975708  280221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 18:46:34.980403  280221 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 18:46:34.980480  280221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 18:46:34.986255  280221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 18:46:34.997164  280221 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:46:35.001636  280221 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:46:35.001702  280221 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-149276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-149276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:46:35.001859  280221 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:46:35.001956  280221 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:46:35.037979  280221 cri.go:89] found id: ""
	I0920 18:46:35.038054  280221 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:46:35.048101  280221 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:46:35.058524  280221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:46:35.068689  280221 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:46:35.068715  280221 kubeadm.go:157] found existing configuration files:
	
	I0920 18:46:35.068772  280221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:46:35.078233  280221 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:46:35.078297  280221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:46:35.088412  280221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:46:35.098093  280221 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:46:35.098161  280221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:46:35.108426  280221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:46:35.119195  280221 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:46:35.119273  280221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:46:35.129218  280221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:46:35.138546  280221 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:46:35.138626  280221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:46:35.148149  280221 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:46:35.260444  280221 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:46:35.260619  280221 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:46:35.413191  280221 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:46:35.413363  280221 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:46:35.413499  280221 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:46:35.637812  280221 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:46:35.772063  280221 out.go:235]   - Generating certificates and keys ...
	I0920 18:46:35.772196  280221 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:46:35.772284  280221 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:46:36.022105  280221 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 18:46:36.235553  280221 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 18:46:36.487770  280221 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 18:46:36.762192  280221 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 18:46:36.917238  280221 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 18:46:36.917439  280221 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-149276 localhost] and IPs [192.168.50.65 127.0.0.1 ::1]
	I0920 18:46:36.970022  280221 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 18:46:36.970214  280221 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-149276 localhost] and IPs [192.168.50.65 127.0.0.1 ::1]
	I0920 18:46:37.070376  280221 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 18:46:37.294357  280221 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 18:46:37.435277  280221 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 18:46:37.435370  280221 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:46:37.709372  280221 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:46:38.086468  280221 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:46:38.198105  280221 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:46:38.368468  280221 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:46:38.383955  280221 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:46:38.384510  280221 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:46:38.384577  280221 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:46:38.516046  280221 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:46:38.518492  280221 out.go:235]   - Booting up control plane ...
	I0920 18:46:38.518640  280221 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:46:38.531126  280221 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:46:38.532410  280221 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:46:38.533531  280221 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:46:38.539888  280221 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:47:18.535138  280221 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:47:18.536011  280221 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:47:18.536401  280221 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:47:23.536786  280221 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:47:23.537076  280221 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:47:33.536632  280221 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:47:33.536965  280221 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:47:53.536998  280221 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:47:53.537248  280221 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:48:33.539142  280221 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:48:33.539377  280221 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:48:33.539397  280221 kubeadm.go:310] 
	I0920 18:48:33.539437  280221 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:48:33.539473  280221 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:48:33.539479  280221 kubeadm.go:310] 
	I0920 18:48:33.539509  280221 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:48:33.539538  280221 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:48:33.539626  280221 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:48:33.539638  280221 kubeadm.go:310] 
	I0920 18:48:33.539756  280221 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:48:33.539806  280221 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:48:33.539851  280221 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:48:33.539862  280221 kubeadm.go:310] 
	I0920 18:48:33.540009  280221 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:48:33.540151  280221 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:48:33.540172  280221 kubeadm.go:310] 
	I0920 18:48:33.540322  280221 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:48:33.540458  280221 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:48:33.540578  280221 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:48:33.540681  280221 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:48:33.540707  280221 kubeadm.go:310] 
	I0920 18:48:33.541006  280221 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:48:33.541108  280221 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:48:33.541220  280221 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0920 18:48:33.541370  280221 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-149276 localhost] and IPs [192.168.50.65 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-149276 localhost] and IPs [192.168.50.65 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-149276 localhost] and IPs [192.168.50.65 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-149276 localhost] and IPs [192.168.50.65 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0920 18:48:33.541420  280221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:48:34.978549  280221 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.437095201s)
	I0920 18:48:34.978643  280221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:48:34.992817  280221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:48:35.002682  280221 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:48:35.002704  280221 kubeadm.go:157] found existing configuration files:
	
	I0920 18:48:35.002751  280221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:48:35.012165  280221 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:48:35.012252  280221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:48:35.022146  280221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:48:35.031932  280221 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:48:35.031991  280221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:48:35.041194  280221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:48:35.050519  280221 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:48:35.050595  280221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:48:35.060336  280221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:48:35.069665  280221 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:48:35.069738  280221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:48:35.078989  280221 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:48:35.286514  280221 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:50:31.210882  280221 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:50:31.210992  280221 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 18:50:31.212386  280221 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:50:31.212484  280221 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:50:31.212588  280221 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:50:31.212696  280221 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:50:31.212831  280221 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:50:31.212936  280221 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:50:31.215104  280221 out.go:235]   - Generating certificates and keys ...
	I0920 18:50:31.215231  280221 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:50:31.215342  280221 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:50:31.215466  280221 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:50:31.215576  280221 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:50:31.215674  280221 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:50:31.215722  280221 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:50:31.215790  280221 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:50:31.215848  280221 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:50:31.215929  280221 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:50:31.216030  280221 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:50:31.216095  280221 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:50:31.216148  280221 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:50:31.216229  280221 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:50:31.216309  280221 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:50:31.216403  280221 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:50:31.216490  280221 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:50:31.216576  280221 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:50:31.216652  280221 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:50:31.216693  280221 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:50:31.216757  280221 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:50:31.218310  280221 out.go:235]   - Booting up control plane ...
	I0920 18:50:31.218418  280221 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:50:31.218528  280221 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:50:31.218611  280221 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:50:31.218699  280221 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:50:31.218899  280221 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:50:31.218981  280221 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:50:31.219070  280221 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:50:31.219267  280221 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:50:31.219383  280221 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:50:31.219711  280221 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:50:31.219818  280221 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:50:31.220046  280221 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:50:31.220136  280221 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:50:31.220375  280221 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:50:31.220461  280221 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:50:31.220717  280221 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:50:31.220724  280221 kubeadm.go:310] 
	I0920 18:50:31.220771  280221 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:50:31.220821  280221 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:50:31.220827  280221 kubeadm.go:310] 
	I0920 18:50:31.220878  280221 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:50:31.220936  280221 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:50:31.221072  280221 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:50:31.221088  280221 kubeadm.go:310] 
	I0920 18:50:31.221179  280221 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:50:31.221213  280221 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:50:31.221251  280221 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:50:31.221261  280221 kubeadm.go:310] 
	I0920 18:50:31.221408  280221 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:50:31.221522  280221 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:50:31.221532  280221 kubeadm.go:310] 
	I0920 18:50:31.221659  280221 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:50:31.221738  280221 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:50:31.221819  280221 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:50:31.221924  280221 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:50:31.222013  280221 kubeadm.go:310] 
	I0920 18:50:31.222028  280221 kubeadm.go:394] duration metric: took 3m56.220332212s to StartCluster
	I0920 18:50:31.222078  280221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:50:31.222148  280221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:50:31.274010  280221 cri.go:89] found id: ""
	I0920 18:50:31.274042  280221 logs.go:276] 0 containers: []
	W0920 18:50:31.274053  280221 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:50:31.274062  280221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:50:31.274138  280221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:50:31.310634  280221 cri.go:89] found id: ""
	I0920 18:50:31.310672  280221 logs.go:276] 0 containers: []
	W0920 18:50:31.310696  280221 logs.go:278] No container was found matching "etcd"
	I0920 18:50:31.310705  280221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:50:31.310779  280221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:50:31.345002  280221 cri.go:89] found id: ""
	I0920 18:50:31.345035  280221 logs.go:276] 0 containers: []
	W0920 18:50:31.345044  280221 logs.go:278] No container was found matching "coredns"
	I0920 18:50:31.345050  280221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:50:31.345102  280221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:50:31.391670  280221 cri.go:89] found id: ""
	I0920 18:50:31.391704  280221 logs.go:276] 0 containers: []
	W0920 18:50:31.391715  280221 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:50:31.391725  280221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:50:31.391787  280221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:50:31.431651  280221 cri.go:89] found id: ""
	I0920 18:50:31.431686  280221 logs.go:276] 0 containers: []
	W0920 18:50:31.431704  280221 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:50:31.431713  280221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:50:31.431778  280221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:50:31.465313  280221 cri.go:89] found id: ""
	I0920 18:50:31.465345  280221 logs.go:276] 0 containers: []
	W0920 18:50:31.465354  280221 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:50:31.465367  280221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:50:31.465432  280221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:50:31.498231  280221 cri.go:89] found id: ""
	I0920 18:50:31.498268  280221 logs.go:276] 0 containers: []
	W0920 18:50:31.498276  280221 logs.go:278] No container was found matching "kindnet"
	I0920 18:50:31.498287  280221 logs.go:123] Gathering logs for kubelet ...
	I0920 18:50:31.498298  280221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:50:31.552332  280221 logs.go:123] Gathering logs for dmesg ...
	I0920 18:50:31.552378  280221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:50:31.567807  280221 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:50:31.567841  280221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:50:31.694440  280221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:50:31.694471  280221 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:50:31.694491  280221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:50:31.805476  280221 logs.go:123] Gathering logs for container status ...
	I0920 18:50:31.805523  280221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0920 18:50:31.852019  280221 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 18:50:31.852100  280221 out.go:270] * 
	* 
	W0920 18:50:31.852173  280221 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:50:31.852195  280221 out.go:270] * 
	* 
	W0920 18:50:31.853379  280221 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:50:31.857266  280221 out.go:201] 
	W0920 18:50:31.858944  280221 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:50:31.859017  280221 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 18:50:31.859053  280221 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 18:50:31.860798  280221 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-149276 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
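The failed first start above ends with minikube's own hints: related issue #4172 and a suggestion to pin the kubelet cgroup driver to systemd. As an illustrative retry only (reusing this run's profile name, memory, and Kubernetes version from the log; not part of the recorded test output), the suggested invocation would look like:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-149276 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd

If the control plane still times out, the troubleshooting commands quoted in the log can be run on the node to see why the kubelet never answered on 127.0.0.1:10248:

	systemctl status kubelet
	journalctl -xeu kubelet
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause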
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-149276
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-149276: (6.305176516s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-149276 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-149276 status --format={{.Host}}: exit status 7 (72.110764ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-149276 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-149276 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.207112349s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-149276 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-149276 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-149276 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (91.620836ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-149276] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-149276
	    minikube start -p kubernetes-upgrade-149276 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1492762 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-149276 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-149276 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-149276 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.360110638s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-20 18:52:20.044139703 +0000 UTC m=+4602.704586363
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-149276 -n kubernetes-upgrade-149276
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-149276 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-149276 logs -n 25: (1.779833253s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p running-upgrade-901769             | minikube                  | jenkins | v1.26.0 | 20 Sep 24 18:48 UTC | 20 Sep 24 18:49 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-115059 sudo           | NoKubernetes-115059       | jenkins | v1.34.0 | 20 Sep 24 18:48 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-115059                | NoKubernetes-115059       | jenkins | v1.34.0 | 20 Sep 24 18:48 UTC | 20 Sep 24 18:48 UTC |
	| start   | -p NoKubernetes-115059                | NoKubernetes-115059       | jenkins | v1.34.0 | 20 Sep 24 18:48 UTC | 20 Sep 24 18:49 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-115059 sudo           | NoKubernetes-115059       | jenkins | v1.34.0 | 20 Sep 24 18:49 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-115059                | NoKubernetes-115059       | jenkins | v1.34.0 | 20 Sep 24 18:49 UTC | 20 Sep 24 18:49 UTC |
	| start   | -p stopped-upgrade-108885             | minikube                  | jenkins | v1.26.0 | 20 Sep 24 18:49 UTC | 20 Sep 24 18:50 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| start   | -p running-upgrade-901769             | running-upgrade-901769    | jenkins | v1.34.0 | 20 Sep 24 18:49 UTC | 20 Sep 24 18:51 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-108885 stop           | minikube                  | jenkins | v1.26.0 | 20 Sep 24 18:50 UTC | 20 Sep 24 18:50 UTC |
	| start   | -p stopped-upgrade-108885             | stopped-upgrade-108885    | jenkins | v1.34.0 | 20 Sep 24 18:50 UTC | 20 Sep 24 18:51 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-149276          | kubernetes-upgrade-149276 | jenkins | v1.34.0 | 20 Sep 24 18:50 UTC | 20 Sep 24 18:50 UTC |
	| start   | -p kubernetes-upgrade-149276          | kubernetes-upgrade-149276 | jenkins | v1.34.0 | 20 Sep 24 18:50 UTC | 20 Sep 24 18:51 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-648841             | cert-expiration-648841    | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-108885             | stopped-upgrade-108885    | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	| delete  | -p running-upgrade-901769             | running-upgrade-901769    | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	| start   | -p cert-options-178420                | cert-options-178420       | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-554447 --memory=2048         | pause-554447              | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-149276          | kubernetes-upgrade-149276 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-149276          | kubernetes-upgrade-149276 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:52 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-648841             | cert-expiration-648841    | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	| start   | -p auto-793540 --memory=3072          | auto-793540               | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-178420 ssh               | cert-options-178420       | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-178420 -- sudo        | cert-options-178420       | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-178420                | cert-options-178420       | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	| start   | -p kindnet-793540                     | kindnet-793540            | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:51:51
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
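	(Editor's note: the header above is the standard klog prefix, so severity-filtering the dump below is a one-liner; an illustrative sketch only, assuming the log has been saved to a file - the name last-start.log is hypothetical.)
	
	  # keep only Warning/Error lines such as "W0920 18:51:58.164120  287271 fix.go:138] ..."
	  grep -E '^[[:space:]]*[WE][0-9]{4} [0-9:.]+' last-start.log
	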
	I0920 18:51:51.652970  287895 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:51:51.653137  287895 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:51:51.653149  287895 out.go:358] Setting ErrFile to fd 2...
	I0920 18:51:51.653156  287895 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:51:51.653356  287895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 18:51:51.654070  287895 out.go:352] Setting JSON to false
	I0920 18:51:51.655080  287895 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9255,"bootTime":1726849057,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:51:51.655202  287895 start.go:139] virtualization: kvm guest
	I0920 18:51:51.657869  287895 out.go:177] * [kindnet-793540] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:51:51.659509  287895 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 18:51:51.659537  287895 notify.go:220] Checking for updates...
	I0920 18:51:51.662363  287895 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:51:51.663885  287895 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 18:51:51.665540  287895 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:51:51.667125  287895 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:51:51.668864  287895 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:51:51.671069  287895 config.go:182] Loaded profile config "auto-793540": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:51:51.671224  287895 config.go:182] Loaded profile config "kubernetes-upgrade-149276": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:51:51.671394  287895 config.go:182] Loaded profile config "pause-554447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:51:51.671541  287895 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:51:51.712300  287895 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 18:51:51.713641  287895 start.go:297] selected driver: kvm2
	I0920 18:51:51.713664  287895 start.go:901] validating driver "kvm2" against <nil>
	I0920 18:51:51.713691  287895 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:51:51.714505  287895 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:51:51.714627  287895 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:51:51.735392  287895 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:51:51.735449  287895 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:51:51.735678  287895 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:51:51.735712  287895 cni.go:84] Creating CNI manager for "kindnet"
	I0920 18:51:51.735717  287895 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 18:51:51.735766  287895 start.go:340] cluster config:
	{Name:kindnet-793540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-793540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:51:51.735878  287895 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:51:51.738015  287895 out.go:177] * Starting "kindnet-793540" primary control-plane node in "kindnet-793540" cluster
	I0920 18:51:51.043023  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:51.066888  287153 main.go:141] libmachine: (pause-554447) DBG | unable to find current IP address of domain pause-554447 in network mk-pause-554447
	I0920 18:51:51.066913  287153 main.go:141] libmachine: (pause-554447) DBG | I0920 18:51:51.066792  287390 retry.go:31] will retry after 5.299635497s: waiting for machine to come up
	I0920 18:51:51.739257  287895 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:51:51.739297  287895 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:51:51.739304  287895 cache.go:56] Caching tarball of preloaded images
	I0920 18:51:51.739379  287895 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:51:51.739390  287895 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:51:51.739495  287895 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/config.json ...
	I0920 18:51:51.739517  287895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/config.json: {Name:mk2a8609fa0c7f931c07e186f2de277842f40f3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:51:51.739661  287895 start.go:360] acquireMachinesLock for kindnet-793540: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:51:58.139065  287271 start.go:364] duration metric: took 41.294497082s to acquireMachinesLock for "kubernetes-upgrade-149276"
	I0920 18:51:58.139145  287271 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:51:58.139152  287271 fix.go:54] fixHost starting: 
	I0920 18:51:58.139560  287271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:51:58.139599  287271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:51:58.160296  287271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39649
	I0920 18:51:58.160768  287271 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:51:58.161336  287271 main.go:141] libmachine: Using API Version  1
	I0920 18:51:58.161367  287271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:51:58.161761  287271 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:51:58.162057  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .DriverName
	I0920 18:51:58.162209  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetState
	I0920 18:51:58.164097  287271 fix.go:112] recreateIfNeeded on kubernetes-upgrade-149276: state=Running err=<nil>
	W0920 18:51:58.164120  287271 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:51:58.166392  287271 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-149276" VM ...
	I0920 18:51:56.368011  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:56.368877  287153 main.go:141] libmachine: (pause-554447) Found IP for machine: 192.168.61.38
	I0920 18:51:56.368906  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has current primary IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:56.368912  287153 main.go:141] libmachine: (pause-554447) Reserving static IP address...
	I0920 18:51:56.369366  287153 main.go:141] libmachine: (pause-554447) DBG | unable to find host DHCP lease matching {name: "pause-554447", mac: "52:54:00:2c:66:a0", ip: "192.168.61.38"} in network mk-pause-554447
	I0920 18:51:56.456773  287153 main.go:141] libmachine: (pause-554447) DBG | Getting to WaitForSSH function...
	I0920 18:51:56.456797  287153 main.go:141] libmachine: (pause-554447) Reserved static IP address: 192.168.61.38
	I0920 18:51:56.456807  287153 main.go:141] libmachine: (pause-554447) Waiting for SSH to be available...
	I0920 18:51:56.459637  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:56.460167  287153 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2c:66:a0}
	I0920 18:51:56.460185  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:56.460400  287153 main.go:141] libmachine: (pause-554447) DBG | Using SSH client type: external
	I0920 18:51:56.460421  287153 main.go:141] libmachine: (pause-554447) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/pause-554447/id_rsa (-rw-------)
	I0920 18:51:56.460444  287153 main.go:141] libmachine: (pause-554447) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/pause-554447/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:51:56.460456  287153 main.go:141] libmachine: (pause-554447) DBG | About to run SSH command:
	I0920 18:51:56.460466  287153 main.go:141] libmachine: (pause-554447) DBG | exit 0
	I0920 18:51:56.590511  287153 main.go:141] libmachine: (pause-554447) DBG | SSH cmd err, output: <nil>: 
	I0920 18:51:56.590899  287153 main.go:141] libmachine: (pause-554447) KVM machine creation complete!
	I0920 18:51:56.591174  287153 main.go:141] libmachine: (pause-554447) Calling .GetConfigRaw
	I0920 18:51:56.591842  287153 main.go:141] libmachine: (pause-554447) Calling .DriverName
	I0920 18:51:56.592066  287153 main.go:141] libmachine: (pause-554447) Calling .DriverName
	I0920 18:51:56.592249  287153 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:51:56.592258  287153 main.go:141] libmachine: (pause-554447) Calling .GetState
	I0920 18:51:56.593632  287153 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:51:56.593640  287153 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:51:56.593643  287153 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:51:56.593648  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHHostname
	I0920 18:51:56.596004  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:56.596379  287153 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:51:56.596416  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:56.596533  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHPort
	I0920 18:51:56.596732  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:51:56.596840  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:51:56.596969  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHUsername
	I0920 18:51:56.597067  287153 main.go:141] libmachine: Using SSH client type: native
	I0920 18:51:56.597246  287153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0920 18:51:56.597251  287153 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:51:56.705338  287153 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:51:56.705352  287153 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:51:56.705358  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHHostname
	I0920 18:51:56.708396  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:56.708739  287153 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:51:56.708758  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:56.708965  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHPort
	I0920 18:51:56.709237  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:51:56.709436  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:51:56.709585  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHUsername
	I0920 18:51:56.709820  287153 main.go:141] libmachine: Using SSH client type: native
	I0920 18:51:56.710040  287153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0920 18:51:56.710046  287153 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:51:56.822445  287153 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:51:56.822499  287153 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:51:56.822504  287153 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:51:56.822510  287153 main.go:141] libmachine: (pause-554447) Calling .GetMachineName
	I0920 18:51:56.822798  287153 buildroot.go:166] provisioning hostname "pause-554447"
	I0920 18:51:56.822822  287153 main.go:141] libmachine: (pause-554447) Calling .GetMachineName
	I0920 18:51:56.823127  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHHostname
	I0920 18:51:56.825786  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:56.826226  287153 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:51:56.826245  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:56.826418  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHPort
	I0920 18:51:56.826612  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:51:56.826782  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:51:56.826924  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHUsername
	I0920 18:51:56.827099  287153 main.go:141] libmachine: Using SSH client type: native
	I0920 18:51:56.827411  287153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0920 18:51:56.827420  287153 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-554447 && echo "pause-554447" | sudo tee /etc/hostname
	I0920 18:51:56.956655  287153 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-554447
	
	I0920 18:51:56.956670  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHHostname
	I0920 18:51:56.959629  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:56.959952  287153 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:51:56.959980  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:56.960196  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHPort
	I0920 18:51:56.960384  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:51:56.960523  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:51:56.960616  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHUsername
	I0920 18:51:56.960734  287153 main.go:141] libmachine: Using SSH client type: native
	I0920 18:51:56.960888  287153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0920 18:51:56.960897  287153 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-554447' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-554447/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-554447' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:51:57.082252  287153 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:51:57.082275  287153 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 18:51:57.082316  287153 buildroot.go:174] setting up certificates
	I0920 18:51:57.082325  287153 provision.go:84] configureAuth start
	I0920 18:51:57.082333  287153 main.go:141] libmachine: (pause-554447) Calling .GetMachineName
	I0920 18:51:57.082637  287153 main.go:141] libmachine: (pause-554447) Calling .GetIP
	I0920 18:51:57.085770  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:57.086131  287153 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:51:57.086154  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:57.086308  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHHostname
	I0920 18:51:57.088713  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:57.088997  287153 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:51:57.089021  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:57.089191  287153 provision.go:143] copyHostCerts
	I0920 18:51:57.089243  287153 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 18:51:57.089259  287153 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 18:51:57.089314  287153 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 18:51:57.089402  287153 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 18:51:57.089406  287153 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 18:51:57.089423  287153 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 18:51:57.089470  287153 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 18:51:57.089473  287153 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 18:51:57.089488  287153 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 18:51:57.089528  287153 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.pause-554447 san=[127.0.0.1 192.168.61.38 localhost minikube pause-554447]
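	(Editor's note: the server certificate generated above carries the SANs listed in san=[...]; they can be verified with the same openssl invocation the cert-options test uses elsewhere in this report - an illustrative check, not part of the run.)
	
	  openssl x509 -text -noout \
	    -in /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'
	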
	I0920 18:51:57.484653  287153 provision.go:177] copyRemoteCerts
	I0920 18:51:57.484732  287153 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:51:57.484757  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHHostname
	I0920 18:51:57.488009  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:57.488364  287153 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:51:57.488403  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:57.488628  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHPort
	I0920 18:51:57.488850  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:51:57.489008  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHUsername
	I0920 18:51:57.489200  287153 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/pause-554447/id_rsa Username:docker}
	I0920 18:51:57.576549  287153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:51:57.600829  287153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0920 18:51:57.624015  287153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:51:57.648149  287153 provision.go:87] duration metric: took 565.808987ms to configureAuth
	I0920 18:51:57.648170  287153 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:51:57.648348  287153 config.go:182] Loaded profile config "pause-554447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:51:57.648416  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHHostname
	I0920 18:51:57.651452  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:57.651868  287153 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:51:57.651894  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:57.652058  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHPort
	I0920 18:51:57.652295  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:51:57.652465  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:51:57.652716  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHUsername
	I0920 18:51:57.652892  287153 main.go:141] libmachine: Using SSH client type: native
	I0920 18:51:57.653071  287153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0920 18:51:57.653079  287153 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:51:57.883450  287153 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
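	
	(Editor's note: the step above is minikube dropping a CRI-O override into /etc/sysconfig and restarting the service; done by hand on the guest it is roughly the sketch below, assuming passwordless sudo - illustrative only.)
	
	  sudo mkdir -p /etc/sysconfig
	  printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
	  sudo systemctl restart crio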
	
	I0920 18:51:57.883464  287153 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:51:57.883471  287153 main.go:141] libmachine: (pause-554447) Calling .GetURL
	I0920 18:51:57.884987  287153 main.go:141] libmachine: (pause-554447) DBG | Using libvirt version 6000000
	I0920 18:51:57.887691  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:57.888046  287153 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:51:57.888074  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:57.888325  287153 main.go:141] libmachine: Docker is up and running!
	I0920 18:51:57.888334  287153 main.go:141] libmachine: Reticulating splines...
	I0920 18:51:57.888341  287153 client.go:171] duration metric: took 26.549820951s to LocalClient.Create
	I0920 18:51:57.888366  287153 start.go:167] duration metric: took 26.549892727s to libmachine.API.Create "pause-554447"
	I0920 18:51:57.888374  287153 start.go:293] postStartSetup for "pause-554447" (driver="kvm2")
	I0920 18:51:57.888385  287153 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:51:57.888404  287153 main.go:141] libmachine: (pause-554447) Calling .DriverName
	I0920 18:51:57.888669  287153 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:51:57.888690  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHHostname
	I0920 18:51:57.891216  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:57.891578  287153 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:51:57.891597  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:57.891824  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHPort
	I0920 18:51:57.892019  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:51:57.892156  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHUsername
	I0920 18:51:57.892277  287153 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/pause-554447/id_rsa Username:docker}
	I0920 18:51:57.976734  287153 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:51:57.980980  287153 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:51:57.981000  287153 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 18:51:57.981087  287153 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 18:51:57.981185  287153 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 18:51:57.981304  287153 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:51:57.991483  287153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:51:58.018742  287153 start.go:296] duration metric: took 130.352292ms for postStartSetup
	I0920 18:51:58.018788  287153 main.go:141] libmachine: (pause-554447) Calling .GetConfigRaw
	I0920 18:51:58.019425  287153 main.go:141] libmachine: (pause-554447) Calling .GetIP
	I0920 18:51:58.022796  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:58.023282  287153 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:51:58.023332  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:58.023606  287153 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/config.json ...
	I0920 18:51:58.023852  287153 start.go:128] duration metric: took 26.712534045s to createHost
	I0920 18:51:58.023874  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHHostname
	I0920 18:51:58.026368  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:58.026758  287153 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:51:58.026787  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:58.026963  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHPort
	I0920 18:51:58.027193  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:51:58.027369  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:51:58.027562  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHUsername
	I0920 18:51:58.027704  287153 main.go:141] libmachine: Using SSH client type: native
	I0920 18:51:58.027902  287153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0920 18:51:58.027909  287153 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:51:58.138923  287153 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726858318.126735500
	
	I0920 18:51:58.138935  287153 fix.go:216] guest clock: 1726858318.126735500
	I0920 18:51:58.138942  287153 fix.go:229] Guest: 2024-09-20 18:51:58.1267355 +0000 UTC Remote: 2024-09-20 18:51:58.023858843 +0000 UTC m=+44.561345180 (delta=102.876657ms)
	I0920 18:51:58.138966  287153 fix.go:200] guest clock delta is within tolerance: 102.876657ms
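	(Editor's note: the delta above is simply guest clock minus host clock at the same instant: 1726858318.126735500 s - 1726858318.023858843 s = 0.102876657 s, i.e. the ~102.9 ms reported, which the log records as within tolerance.)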
	I0920 18:51:58.138972  287153 start.go:83] releasing machines lock for "pause-554447", held for 26.82783209s
	I0920 18:51:58.139001  287153 main.go:141] libmachine: (pause-554447) Calling .DriverName
	I0920 18:51:58.139356  287153 main.go:141] libmachine: (pause-554447) Calling .GetIP
	I0920 18:51:58.142314  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:58.142647  287153 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:51:58.142673  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:58.142907  287153 main.go:141] libmachine: (pause-554447) Calling .DriverName
	I0920 18:51:58.143478  287153 main.go:141] libmachine: (pause-554447) Calling .DriverName
	I0920 18:51:58.143663  287153 main.go:141] libmachine: (pause-554447) Calling .DriverName
	I0920 18:51:58.143740  287153 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:51:58.143788  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHHostname
	I0920 18:51:58.143888  287153 ssh_runner.go:195] Run: cat /version.json
	I0920 18:51:58.143904  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHHostname
	I0920 18:51:58.146861  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:58.146892  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:58.147315  287153 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:51:58.147335  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:58.147362  287153 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:51:58.147371  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:58.147519  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHPort
	I0920 18:51:58.147654  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHPort
	I0920 18:51:58.147715  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:51:58.147777  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:51:58.147867  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHUsername
	I0920 18:51:58.148003  287153 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/pause-554447/id_rsa Username:docker}
	I0920 18:51:58.148055  287153 main.go:141] libmachine: (pause-554447) Calling .GetSSHUsername
	I0920 18:51:58.148210  287153 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/pause-554447/id_rsa Username:docker}
	I0920 18:51:58.268248  287153 ssh_runner.go:195] Run: systemctl --version
	I0920 18:51:58.274705  287153 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:51:58.443667  287153 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:51:58.450318  287153 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:51:58.450371  287153 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:51:58.468977  287153 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:51:58.468993  287153 start.go:495] detecting cgroup driver to use...
	I0920 18:51:58.469069  287153 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:51:58.486993  287153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:51:58.500782  287153 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:51:58.500841  287153 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:51:58.514861  287153 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:51:58.528919  287153 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:51:58.646990  287153 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:51:58.796784  287153 docker.go:233] disabling docker service ...
	I0920 18:51:58.796858  287153 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:51:58.811388  287153 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:51:58.826597  287153 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:51:58.963179  287153 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:51:59.109515  287153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:51:59.125497  287153 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:51:59.145305  287153 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:51:59.145353  287153 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:51:59.155890  287153 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:51:59.155950  287153 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:51:59.166296  287153 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:51:59.176768  287153 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:51:59.187720  287153 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:51:59.198770  287153 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:51:59.209736  287153 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:51:59.227551  287153 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:51:59.237616  287153 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:51:59.246817  287153 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:51:59.246883  287153 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:51:59.260307  287153 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
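	(Editor's note: the three commands above are the usual bridge-netfilter preparation - the sysctl probe fails because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is enabled; by hand this is roughly the sketch below, assuming passwordless sudo and a kernel that ships br_netfilter.)
	
	  sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1 \
	    || sudo modprobe br_netfilter                      # load the module if the sysctl key is missing
	  echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward      # enable IPv4 forwarding for pod traffic
	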
	I0920 18:51:59.270360  287153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:51:59.402906  287153 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:51:59.491846  287153 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:51:59.491948  287153 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:51:59.496477  287153 start.go:563] Will wait 60s for crictl version
	I0920 18:51:59.496540  287153 ssh_runner.go:195] Run: which crictl
	I0920 18:51:59.500254  287153 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:51:59.538041  287153 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:51:59.538124  287153 ssh_runner.go:195] Run: crio --version
	I0920 18:51:59.564950  287153 ssh_runner.go:195] Run: crio --version
	I0920 18:51:59.595113  287153 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:51:58.167789  287271 machine.go:93] provisionDockerMachine start ...
	I0920 18:51:58.167823  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .DriverName
	I0920 18:51:58.168113  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHHostname
	I0920 18:51:58.171057  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:51:58.171513  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:50:51 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:51:58.171536  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:51:58.171719  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHPort
	I0920 18:51:58.171935  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:51:58.172124  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:51:58.172250  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHUsername
	I0920 18:51:58.172373  287271 main.go:141] libmachine: Using SSH client type: native
	I0920 18:51:58.172569  287271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0920 18:51:58.172584  287271 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:51:58.287264  287271 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-149276
	
	I0920 18:51:58.287293  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetMachineName
	I0920 18:51:58.287655  287271 buildroot.go:166] provisioning hostname "kubernetes-upgrade-149276"
	I0920 18:51:58.287693  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetMachineName
	I0920 18:51:58.287911  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHHostname
	I0920 18:51:58.291190  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:51:58.291622  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:50:51 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:51:58.291650  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:51:58.291942  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHPort
	I0920 18:51:58.292198  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:51:58.292387  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:51:58.292592  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHUsername
	I0920 18:51:58.292800  287271 main.go:141] libmachine: Using SSH client type: native
	I0920 18:51:58.293068  287271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0920 18:51:58.293089  287271 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-149276 && echo "kubernetes-upgrade-149276" | sudo tee /etc/hostname
	I0920 18:51:58.425767  287271 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-149276
	
	I0920 18:51:58.425807  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHHostname
	I0920 18:51:58.428913  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:51:58.429252  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:50:51 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:51:58.429290  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:51:58.429459  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHPort
	I0920 18:51:58.429676  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:51:58.429923  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:51:58.430097  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHUsername
	I0920 18:51:58.430323  287271 main.go:141] libmachine: Using SSH client type: native
	I0920 18:51:58.430506  287271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0920 18:51:58.430523  287271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-149276' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-149276/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-149276' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:51:58.549135  287271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
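
The hostname step above runs a small idempotent script over SSH: it only touches /etc/hosts when no line already maps the new hostname, rewriting an existing 127.0.1.1 entry if present and appending one otherwise. Below is a minimal Go sketch that composes the same command string; the helper name is hypothetical and this is not minikube's actual provisioner code.

package main

import "fmt"

// setHostnameCmd returns the idempotent shell snippet shown in the log:
// map 127.0.1.1 to the node's hostname only if no mapping exists yet.
func setHostnameCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(setHostnameCmd("kubernetes-upgrade-149276"))
}
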
	I0920 18:51:58.549176  287271 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 18:51:58.549242  287271 buildroot.go:174] setting up certificates
	I0920 18:51:58.549257  287271 provision.go:84] configureAuth start
	I0920 18:51:58.549276  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetMachineName
	I0920 18:51:58.549685  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetIP
	I0920 18:51:58.553167  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:51:58.553671  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:50:51 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:51:58.553701  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:51:58.553931  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHHostname
	I0920 18:51:58.556852  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:51:58.557278  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:50:51 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:51:58.557307  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:51:58.557609  287271 provision.go:143] copyHostCerts
	I0920 18:51:58.557681  287271 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 18:51:58.557704  287271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 18:51:58.557776  287271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 18:51:58.557875  287271 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 18:51:58.557884  287271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 18:51:58.557928  287271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 18:51:58.557996  287271 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 18:51:58.558011  287271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 18:51:58.558031  287271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 18:51:58.558085  287271 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-149276 san=[127.0.0.1 192.168.50.65 kubernetes-upgrade-149276 localhost minikube]
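
For context on the provision.go:117 line above: server.pem is a CA-signed certificate whose subject alternative names cover every address the machine may be reached on (loopback, the DHCP-assigned IP, the machine name, localhost, minikube). A rough, self-contained Go sketch of issuing such a certificate with the standard library follows; the function name and signature are illustrative assumptions, not minikube's actual code.

// Hedged sketch, not minikube's provision.go: issue a server certificate whose
// SANs match the list in the log line above, signed by an already-parsed CA.
package provisionsketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// IssueServerCert returns a PEM-encoded server cert plus its private key.
// caCert/caKey correspond to ca.pem / ca-key.pem from the minikube cert dir.
func IssueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, host string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins." + host}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{host, "localhost", "minikube"},
		IPAddresses:  ips, // e.g. 127.0.0.1 and the machine's 192.168.50.65
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
}
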
	I0920 18:51:58.730478  287271 provision.go:177] copyRemoteCerts
	I0920 18:51:58.730557  287271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:51:58.730588  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHHostname
	I0920 18:51:58.734349  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:51:58.734756  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:50:51 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:51:58.734813  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:51:58.735007  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHPort
	I0920 18:51:58.735286  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:51:58.735459  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHUsername
	I0920 18:51:58.735645  287271 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/kubernetes-upgrade-149276/id_rsa Username:docker}
	I0920 18:51:58.828549  287271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0920 18:51:58.855134  287271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:51:58.881898  287271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:51:58.910173  287271 provision.go:87] duration metric: took 360.898211ms to configureAuth
	I0920 18:51:58.910202  287271 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:51:58.910358  287271 config.go:182] Loaded profile config "kubernetes-upgrade-149276": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:51:58.910448  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHHostname
	I0920 18:51:58.913565  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:51:58.913965  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:50:51 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:51:58.913999  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:51:58.914232  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHPort
	I0920 18:51:58.914445  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:51:58.914651  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:51:58.914880  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHUsername
	I0920 18:51:58.915104  287271 main.go:141] libmachine: Using SSH client type: native
	I0920 18:51:58.915290  287271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0920 18:51:58.915306  287271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:51:59.596385  287153 main.go:141] libmachine: (pause-554447) Calling .GetIP
	I0920 18:51:59.599413  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:59.599871  287153 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:51:59.599890  287153 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:51:59.600150  287153 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 18:51:59.604111  287153 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:51:59.616376  287153 kubeadm.go:883] updating cluster {Name:pause-554447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-554447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:51:59.616478  287153 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:51:59.616519  287153 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:51:59.652689  287153 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:51:59.652746  287153 ssh_runner.go:195] Run: which lz4
	I0920 18:51:59.656700  287153 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:51:59.660696  287153 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:51:59.660721  287153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:52:00.882998  287153 crio.go:462] duration metric: took 1.226332706s to copy over tarball
	I0920 18:52:00.883066  287153 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:52:02.924376  287153 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.041280659s)
	I0920 18:52:02.924393  287153 crio.go:469] duration metric: took 2.041376225s to extract the tarball
	I0920 18:52:02.924404  287153 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:52:02.961747  287153 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:52:03.010490  287153 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:52:03.010503  287153 cache_images.go:84] Images are preloaded, skipping loading
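
The preload sequence between 18:51:59.652 and 18:52:03.010 follows one pattern: crictl reports the expected image set is missing, so the cached preload tarball is copied to the VM and unpacked into /var with lz4, after which the same crictl check passes. A compact Go sketch of that decision is shown below; the helper and paths are illustrative assumptions, not minikube's cache_images.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensurePreload unpacks the preload tarball only when the expected image is
// not already present in the CRI store (mirrors the logged crictl check).
func ensurePreload(image, tarball string) error {
	out, _ := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if strings.Contains(string(out), image) {
		fmt.Println("all images are preloaded, skipping")
		return nil
	}
	// assumes the tarball was already copied to the guest, as in the log
	return exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball).Run()
}

func main() {
	_ = ensurePreload("registry.k8s.io/kube-apiserver:v1.31.1", "/preloaded.tar.lz4")
}
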
	I0920 18:52:03.010509  287153 kubeadm.go:934] updating node { 192.168.61.38 8443 v1.31.1 crio true true} ...
	I0920 18:52:03.010620  287153 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-554447 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-554447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:52:03.010706  287153 ssh_runner.go:195] Run: crio config
	I0920 18:52:03.055705  287153 cni.go:84] Creating CNI manager for ""
	I0920 18:52:03.055716  287153 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:52:03.055725  287153 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:52:03.055752  287153 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.38 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-554447 NodeName:pause-554447 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:52:03.055948  287153 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-554447"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:52:03.056020  287153 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:52:03.066360  287153 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:52:03.066505  287153 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:52:03.076541  287153 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0920 18:52:03.093348  287153 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:52:03.110895  287153 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 18:52:03.127997  287153 ssh_runner.go:195] Run: grep 192.168.61.38	control-plane.minikube.internal$ /etc/hosts
	I0920 18:52:03.132114  287153 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:52:03.145619  287153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:52:03.274249  287153 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:52:03.292880  287153 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447 for IP: 192.168.61.38
	I0920 18:52:03.292894  287153 certs.go:194] generating shared ca certs ...
	I0920 18:52:03.292910  287153 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:03.293078  287153 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 18:52:03.293111  287153 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 18:52:03.293118  287153 certs.go:256] generating profile certs ...
	I0920 18:52:03.293185  287153 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/client.key
	I0920 18:52:03.293196  287153 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/client.crt with IP's: []
	I0920 18:52:03.354572  287153 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/client.crt ...
	I0920 18:52:03.354592  287153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/client.crt: {Name:mk1d12e8a98ad1f56ddbb20c31f57e5e0aa1ccf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:03.354831  287153 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/client.key ...
	I0920 18:52:03.354844  287153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/client.key: {Name:mk90466c4c71078fc39c40d912ef693bb1cb2263 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:03.354968  287153 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/apiserver.key.4bf6a80a
	I0920 18:52:03.354988  287153 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/apiserver.crt.4bf6a80a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.38]
	I0920 18:52:03.439765  287153 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/apiserver.crt.4bf6a80a ...
	I0920 18:52:03.439781  287153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/apiserver.crt.4bf6a80a: {Name:mk39627802f082b2c40a437006d454122aab5381 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:03.439952  287153 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/apiserver.key.4bf6a80a ...
	I0920 18:52:03.439959  287153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/apiserver.key.4bf6a80a: {Name:mk57b9b204614dcc0d23209c2854d35bfda0a861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:03.440027  287153 certs.go:381] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/apiserver.crt.4bf6a80a -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/apiserver.crt
	I0920 18:52:03.440092  287153 certs.go:385] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/apiserver.key.4bf6a80a -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/apiserver.key
	I0920 18:52:03.440138  287153 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/proxy-client.key
	I0920 18:52:03.440147  287153 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/proxy-client.crt with IP's: []
	I0920 18:52:06.087113  287659 start.go:364] duration metric: took 21.41112366s to acquireMachinesLock for "auto-793540"
	I0920 18:52:06.087206  287659 start.go:93] Provisioning new machine with config: &{Name:auto-793540 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-793540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:52:06.087311  287659 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 18:52:05.832984  287271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:52:05.833013  287271 machine.go:96] duration metric: took 7.665204154s to provisionDockerMachine
	I0920 18:52:05.833028  287271 start.go:293] postStartSetup for "kubernetes-upgrade-149276" (driver="kvm2")
	I0920 18:52:05.833041  287271 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:52:05.833063  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .DriverName
	I0920 18:52:05.833458  287271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:52:05.833496  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHHostname
	I0920 18:52:05.837015  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:52:05.837551  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:50:51 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:52:05.837580  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:52:05.837789  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHPort
	I0920 18:52:05.837998  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:52:05.838212  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHUsername
	I0920 18:52:05.838391  287271 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/kubernetes-upgrade-149276/id_rsa Username:docker}
	I0920 18:52:05.931510  287271 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:52:05.936632  287271 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:52:05.936661  287271 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 18:52:05.936731  287271 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 18:52:05.936806  287271 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 18:52:05.936904  287271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:52:05.948689  287271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:52:05.973289  287271 start.go:296] duration metric: took 140.24348ms for postStartSetup
	I0920 18:52:05.973334  287271 fix.go:56] duration metric: took 7.834181502s for fixHost
	I0920 18:52:05.973371  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHHostname
	I0920 18:52:05.976305  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:52:05.976730  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:50:51 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:52:05.976760  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:52:05.976895  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHPort
	I0920 18:52:05.977122  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:52:05.977297  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:52:05.977489  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHUsername
	I0920 18:52:05.977713  287271 main.go:141] libmachine: Using SSH client type: native
	I0920 18:52:05.977957  287271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0920 18:52:05.977972  287271 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:52:06.086910  287271 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726858326.081864794
	
	I0920 18:52:06.086945  287271 fix.go:216] guest clock: 1726858326.081864794
	I0920 18:52:06.086954  287271 fix.go:229] Guest: 2024-09-20 18:52:06.081864794 +0000 UTC Remote: 2024-09-20 18:52:05.973339091 +0000 UTC m=+49.285646263 (delta=108.525703ms)
	I0920 18:52:06.087001  287271 fix.go:200] guest clock delta is within tolerance: 108.525703ms
	I0920 18:52:06.087014  287271 start.go:83] releasing machines lock for "kubernetes-upgrade-149276", held for 7.947892148s
	I0920 18:52:06.087055  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .DriverName
	I0920 18:52:06.087372  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetIP
	I0920 18:52:06.090910  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:52:06.091366  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:50:51 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:52:06.091399  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:52:06.091542  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .DriverName
	I0920 18:52:06.092167  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .DriverName
	I0920 18:52:06.092419  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .DriverName
	I0920 18:52:06.092515  287271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:52:06.092582  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHHostname
	I0920 18:52:06.092918  287271 ssh_runner.go:195] Run: cat /version.json
	I0920 18:52:06.092947  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHHostname
	I0920 18:52:06.095835  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:52:06.096032  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:52:06.096257  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:50:51 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:52:06.096297  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:52:06.096467  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHPort
	I0920 18:52:06.096547  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:50:51 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:52:06.096571  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:52:06.096670  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:52:06.096774  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHPort
	I0920 18:52:06.096867  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHUsername
	I0920 18:52:06.096935  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHKeyPath
	I0920 18:52:06.097029  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetSSHUsername
	I0920 18:52:06.097073  287271 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/kubernetes-upgrade-149276/id_rsa Username:docker}
	I0920 18:52:06.097381  287271 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/kubernetes-upgrade-149276/id_rsa Username:docker}
	I0920 18:52:06.184236  287271 ssh_runner.go:195] Run: systemctl --version
	I0920 18:52:06.230221  287271 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:52:06.402627  287271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:52:06.410832  287271 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:52:06.410926  287271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:52:06.424936  287271 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 18:52:06.424969  287271 start.go:495] detecting cgroup driver to use...
	I0920 18:52:06.425048  287271 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:52:06.447839  287271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:52:06.465087  287271 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:52:06.465158  287271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:52:06.482097  287271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:52:06.498561  287271 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:52:06.690319  287271 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:52:03.582408  287153 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/proxy-client.crt ...
	I0920 18:52:03.582424  287153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/proxy-client.crt: {Name:mk87885a31a7a34ea807c0e569049946ac6008c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:03.582618  287153 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/proxy-client.key ...
	I0920 18:52:03.582623  287153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/proxy-client.key: {Name:mk68c9e4b246bf1591ad7a870a5a21817396fd9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:03.582782  287153 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 18:52:03.582809  287153 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 18:52:03.582815  287153 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:52:03.582834  287153 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:52:03.582852  287153 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:52:03.582870  287153 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 18:52:03.582905  287153 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:52:03.584181  287153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:52:03.610426  287153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:52:03.633543  287153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:52:03.656584  287153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:52:03.680800  287153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 18:52:03.704119  287153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:52:03.727462  287153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:52:03.752796  287153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:52:03.777117  287153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:52:03.800821  287153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 18:52:03.826829  287153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 18:52:03.851277  287153 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:52:03.868469  287153 ssh_runner.go:195] Run: openssl version
	I0920 18:52:03.874960  287153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 18:52:03.885809  287153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 18:52:03.890215  287153 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 18:52:03.890283  287153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 18:52:03.896315  287153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:52:03.907741  287153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:52:03.919618  287153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:52:03.924494  287153 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:52:03.924556  287153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:52:03.930753  287153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:52:03.941697  287153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 18:52:03.953498  287153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 18:52:03.958255  287153 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 18:52:03.958310  287153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 18:52:03.964281  287153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
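
The three openssl/ln passes above all follow the same pattern: place a PEM bundle under /usr/share/ca-certificates, ask openssl for its subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL-style directory lookups find the CA. A small Go sketch of that pair of steps follows; it is a hypothetical helper that simply shells out to the same commands the log shows.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// trustCert computes the OpenSSL subject hash of pemPath and creates the
// /etc/ssl/certs/<hash>.0 symlink that directory-based verifiers scan.
func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	return exec.Command("sudo", "ln", "-fs", pemPath, "/etc/ssl/certs/"+hash+".0").Run()
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("trust failed:", err)
	}
}
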
	I0920 18:52:03.975620  287153 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:52:03.980267  287153 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:52:03.980317  287153 kubeadm.go:392] StartCluster: {Name:pause-554447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-554447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:52:03.980392  287153 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:52:03.980462  287153 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:52:04.019786  287153 cri.go:89] found id: ""
	I0920 18:52:04.019855  287153 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:52:04.030422  287153 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:52:04.040350  287153 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:52:04.050337  287153 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:52:04.050347  287153 kubeadm.go:157] found existing configuration files:
	
	I0920 18:52:04.050393  287153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:52:04.063070  287153 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:52:04.063137  287153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:52:04.073939  287153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:52:04.085581  287153 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:52:04.085665  287153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:52:04.097295  287153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:52:04.107108  287153 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:52:04.107180  287153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:52:04.124818  287153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:52:04.140279  287153 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:52:04.140335  287153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:52:04.150503  287153 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:52:04.282088  287153 kubeadm.go:310] W0920 18:52:04.275637     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:52:04.282994  287153 kubeadm.go:310] W0920 18:52:04.276838     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:52:04.385750  287153 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:52:06.089742  287659 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 18:52:06.089957  287659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:52:06.089996  287659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:52:06.107503  287659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35743
	I0920 18:52:06.108043  287659 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:52:06.108610  287659 main.go:141] libmachine: Using API Version  1
	I0920 18:52:06.108645  287659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:52:06.109025  287659 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:52:06.109261  287659 main.go:141] libmachine: (auto-793540) Calling .GetMachineName
	I0920 18:52:06.109415  287659 main.go:141] libmachine: (auto-793540) Calling .DriverName
	I0920 18:52:06.109572  287659 start.go:159] libmachine.API.Create for "auto-793540" (driver="kvm2")
	I0920 18:52:06.109598  287659 client.go:168] LocalClient.Create starting
	I0920 18:52:06.109634  287659 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem
	I0920 18:52:06.109684  287659 main.go:141] libmachine: Decoding PEM data...
	I0920 18:52:06.109705  287659 main.go:141] libmachine: Parsing certificate...
	I0920 18:52:06.109776  287659 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem
	I0920 18:52:06.109802  287659 main.go:141] libmachine: Decoding PEM data...
	I0920 18:52:06.109814  287659 main.go:141] libmachine: Parsing certificate...
	I0920 18:52:06.109834  287659 main.go:141] libmachine: Running pre-create checks...
	I0920 18:52:06.109842  287659 main.go:141] libmachine: (auto-793540) Calling .PreCreateCheck
	I0920 18:52:06.110321  287659 main.go:141] libmachine: (auto-793540) Calling .GetConfigRaw
	I0920 18:52:06.110865  287659 main.go:141] libmachine: Creating machine...
	I0920 18:52:06.110881  287659 main.go:141] libmachine: (auto-793540) Calling .Create
	I0920 18:52:06.111073  287659 main.go:141] libmachine: (auto-793540) Creating KVM machine...
	I0920 18:52:06.112547  287659 main.go:141] libmachine: (auto-793540) DBG | found existing default KVM network
	I0920 18:52:06.114361  287659 main.go:141] libmachine: (auto-793540) DBG | I0920 18:52:06.114182  288050 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000155f0}
	I0920 18:52:06.114386  287659 main.go:141] libmachine: (auto-793540) DBG | created network xml: 
	I0920 18:52:06.114399  287659 main.go:141] libmachine: (auto-793540) DBG | <network>
	I0920 18:52:06.114406  287659 main.go:141] libmachine: (auto-793540) DBG |   <name>mk-auto-793540</name>
	I0920 18:52:06.114415  287659 main.go:141] libmachine: (auto-793540) DBG |   <dns enable='no'/>
	I0920 18:52:06.114421  287659 main.go:141] libmachine: (auto-793540) DBG |   
	I0920 18:52:06.114431  287659 main.go:141] libmachine: (auto-793540) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 18:52:06.114441  287659 main.go:141] libmachine: (auto-793540) DBG |     <dhcp>
	I0920 18:52:06.114451  287659 main.go:141] libmachine: (auto-793540) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 18:52:06.114461  287659 main.go:141] libmachine: (auto-793540) DBG |     </dhcp>
	I0920 18:52:06.114472  287659 main.go:141] libmachine: (auto-793540) DBG |   </ip>
	I0920 18:52:06.114478  287659 main.go:141] libmachine: (auto-793540) DBG |   
	I0920 18:52:06.114486  287659 main.go:141] libmachine: (auto-793540) DBG | </network>
	I0920 18:52:06.114495  287659 main.go:141] libmachine: (auto-793540) DBG | 
	I0920 18:52:06.120667  287659 main.go:141] libmachine: (auto-793540) DBG | trying to create private KVM network mk-auto-793540 192.168.39.0/24...
	I0920 18:52:06.204890  287659 main.go:141] libmachine: (auto-793540) DBG | private KVM network mk-auto-793540 192.168.39.0/24 created
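(The network.go:206 line above reports that a free private /24 was chosen before the mk-auto-793540 network was defined. Purely as a hedged illustration, and not minikube's actual selection logic, a candidate /24 could be probed against the host's existing interface addresses; the candidate range below is an assumption.)

    package main

    import (
        "fmt"
        "net"
    )

    // freePrivate24 returns the first candidate /24 that no local interface
    // address falls into. The candidate range is illustrative only.
    func freePrivate24() (*net.IPNet, error) {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return nil, err
        }
        for third := 39; third <= 61; third++ { // hypothetical candidates
            _, cand, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
            inUse := false
            for _, a := range addrs {
                if ipn, ok := a.(*net.IPNet); ok && cand.Contains(ipn.IP) {
                    inUse = true
                    break
                }
            }
            if !inUse {
                return cand, nil
            }
        }
        return nil, fmt.Errorf("no free /24 found")
    }

    func main() {
        n, err := freePrivate24()
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("using free private subnet", n) // e.g. 192.168.39.0/24
    }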
	I0920 18:52:06.204928  287659 main.go:141] libmachine: (auto-793540) DBG | I0920 18:52:06.204881  288050 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:52:06.204947  287659 main.go:141] libmachine: (auto-793540) Setting up store path in /home/jenkins/minikube-integration/19679-237658/.minikube/machines/auto-793540 ...
	I0920 18:52:06.204961  287659 main.go:141] libmachine: (auto-793540) Building disk image from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 18:52:06.205034  287659 main.go:141] libmachine: (auto-793540) Downloading /home/jenkins/minikube-integration/19679-237658/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 18:52:06.493710  287659 main.go:141] libmachine: (auto-793540) DBG | I0920 18:52:06.493525  288050 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/auto-793540/id_rsa...
	I0920 18:52:06.558958  287659 main.go:141] libmachine: (auto-793540) DBG | I0920 18:52:06.558793  288050 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/auto-793540/auto-793540.rawdisk...
	I0920 18:52:06.558987  287659 main.go:141] libmachine: (auto-793540) DBG | Writing magic tar header
	I0920 18:52:06.559000  287659 main.go:141] libmachine: (auto-793540) DBG | Writing SSH key tar header
	I0920 18:52:06.559010  287659 main.go:141] libmachine: (auto-793540) DBG | I0920 18:52:06.558921  288050 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/auto-793540 ...
	I0920 18:52:06.559024  287659 main.go:141] libmachine: (auto-793540) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/auto-793540
	I0920 18:52:06.559048  287659 main.go:141] libmachine: (auto-793540) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/auto-793540 (perms=drwx------)
	I0920 18:52:06.559056  287659 main.go:141] libmachine: (auto-793540) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:52:06.559068  287659 main.go:141] libmachine: (auto-793540) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube (perms=drwxr-xr-x)
	I0920 18:52:06.559107  287659 main.go:141] libmachine: (auto-793540) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines
	I0920 18:52:06.559135  287659 main.go:141] libmachine: (auto-793540) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:52:06.559147  287659 main.go:141] libmachine: (auto-793540) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658 (perms=drwxrwxr-x)
	I0920 18:52:06.559163  287659 main.go:141] libmachine: (auto-793540) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658
	I0920 18:52:06.559175  287659 main.go:141] libmachine: (auto-793540) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:52:06.559188  287659 main.go:141] libmachine: (auto-793540) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:52:06.559198  287659 main.go:141] libmachine: (auto-793540) DBG | Checking permissions on dir: /home
	I0920 18:52:06.559205  287659 main.go:141] libmachine: (auto-793540) DBG | Skipping /home - not owner
	I0920 18:52:06.559220  287659 main.go:141] libmachine: (auto-793540) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:52:06.559231  287659 main.go:141] libmachine: (auto-793540) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:52:06.559249  287659 main.go:141] libmachine: (auto-793540) Creating domain...
	I0920 18:52:06.560617  287659 main.go:141] libmachine: (auto-793540) define libvirt domain using xml: 
	I0920 18:52:06.560641  287659 main.go:141] libmachine: (auto-793540) <domain type='kvm'>
	I0920 18:52:06.560657  287659 main.go:141] libmachine: (auto-793540)   <name>auto-793540</name>
	I0920 18:52:06.560664  287659 main.go:141] libmachine: (auto-793540)   <memory unit='MiB'>3072</memory>
	I0920 18:52:06.560671  287659 main.go:141] libmachine: (auto-793540)   <vcpu>2</vcpu>
	I0920 18:52:06.560676  287659 main.go:141] libmachine: (auto-793540)   <features>
	I0920 18:52:06.560684  287659 main.go:141] libmachine: (auto-793540)     <acpi/>
	I0920 18:52:06.560690  287659 main.go:141] libmachine: (auto-793540)     <apic/>
	I0920 18:52:06.560698  287659 main.go:141] libmachine: (auto-793540)     <pae/>
	I0920 18:52:06.560707  287659 main.go:141] libmachine: (auto-793540)     
	I0920 18:52:06.560714  287659 main.go:141] libmachine: (auto-793540)   </features>
	I0920 18:52:06.560725  287659 main.go:141] libmachine: (auto-793540)   <cpu mode='host-passthrough'>
	I0920 18:52:06.560733  287659 main.go:141] libmachine: (auto-793540)   
	I0920 18:52:06.560739  287659 main.go:141] libmachine: (auto-793540)   </cpu>
	I0920 18:52:06.560747  287659 main.go:141] libmachine: (auto-793540)   <os>
	I0920 18:52:06.560753  287659 main.go:141] libmachine: (auto-793540)     <type>hvm</type>
	I0920 18:52:06.560761  287659 main.go:141] libmachine: (auto-793540)     <boot dev='cdrom'/>
	I0920 18:52:06.560767  287659 main.go:141] libmachine: (auto-793540)     <boot dev='hd'/>
	I0920 18:52:06.560777  287659 main.go:141] libmachine: (auto-793540)     <bootmenu enable='no'/>
	I0920 18:52:06.560788  287659 main.go:141] libmachine: (auto-793540)   </os>
	I0920 18:52:06.560795  287659 main.go:141] libmachine: (auto-793540)   <devices>
	I0920 18:52:06.560803  287659 main.go:141] libmachine: (auto-793540)     <disk type='file' device='cdrom'>
	I0920 18:52:06.560815  287659 main.go:141] libmachine: (auto-793540)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/auto-793540/boot2docker.iso'/>
	I0920 18:52:06.560825  287659 main.go:141] libmachine: (auto-793540)       <target dev='hdc' bus='scsi'/>
	I0920 18:52:06.560839  287659 main.go:141] libmachine: (auto-793540)       <readonly/>
	I0920 18:52:06.560845  287659 main.go:141] libmachine: (auto-793540)     </disk>
	I0920 18:52:06.560854  287659 main.go:141] libmachine: (auto-793540)     <disk type='file' device='disk'>
	I0920 18:52:06.560868  287659 main.go:141] libmachine: (auto-793540)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:52:06.560883  287659 main.go:141] libmachine: (auto-793540)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/auto-793540/auto-793540.rawdisk'/>
	I0920 18:52:06.560890  287659 main.go:141] libmachine: (auto-793540)       <target dev='hda' bus='virtio'/>
	I0920 18:52:06.560969  287659 main.go:141] libmachine: (auto-793540)     </disk>
	I0920 18:52:06.560995  287659 main.go:141] libmachine: (auto-793540)     <interface type='network'>
	I0920 18:52:06.561004  287659 main.go:141] libmachine: (auto-793540)       <source network='mk-auto-793540'/>
	I0920 18:52:06.561010  287659 main.go:141] libmachine: (auto-793540)       <model type='virtio'/>
	I0920 18:52:06.561015  287659 main.go:141] libmachine: (auto-793540)     </interface>
	I0920 18:52:06.561019  287659 main.go:141] libmachine: (auto-793540)     <interface type='network'>
	I0920 18:52:06.561025  287659 main.go:141] libmachine: (auto-793540)       <source network='default'/>
	I0920 18:52:06.561033  287659 main.go:141] libmachine: (auto-793540)       <model type='virtio'/>
	I0920 18:52:06.561041  287659 main.go:141] libmachine: (auto-793540)     </interface>
	I0920 18:52:06.561048  287659 main.go:141] libmachine: (auto-793540)     <serial type='pty'>
	I0920 18:52:06.561056  287659 main.go:141] libmachine: (auto-793540)       <target port='0'/>
	I0920 18:52:06.561070  287659 main.go:141] libmachine: (auto-793540)     </serial>
	I0920 18:52:06.561078  287659 main.go:141] libmachine: (auto-793540)     <console type='pty'>
	I0920 18:52:06.561085  287659 main.go:141] libmachine: (auto-793540)       <target type='serial' port='0'/>
	I0920 18:52:06.561093  287659 main.go:141] libmachine: (auto-793540)     </console>
	I0920 18:52:06.561099  287659 main.go:141] libmachine: (auto-793540)     <rng model='virtio'>
	I0920 18:52:06.561109  287659 main.go:141] libmachine: (auto-793540)       <backend model='random'>/dev/random</backend>
	I0920 18:52:06.561115  287659 main.go:141] libmachine: (auto-793540)     </rng>
	I0920 18:52:06.561148  287659 main.go:141] libmachine: (auto-793540)     
	I0920 18:52:06.561170  287659 main.go:141] libmachine: (auto-793540)     
	I0920 18:52:06.561181  287659 main.go:141] libmachine: (auto-793540)   </devices>
	I0920 18:52:06.561187  287659 main.go:141] libmachine: (auto-793540) </domain>
	I0920 18:52:06.561199  287659 main.go:141] libmachine: (auto-793540) 
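(The driver prints the full libvirt domain XML it is about to define. As a minimal sketch, a definition like the one above can be rendered with text/template; the struct fields and the trimmed-down XML below are assumptions for illustration, not the kvm2 driver's real template.)

    package main

    import (
        "os"
        "text/template"
    )

    // domainTmpl is a trimmed-down libvirt domain definition for illustration.
    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
      <devices>
        <disk type='file' device='disk'>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    type domainConfig struct {
        Name      string
        MemoryMiB int
        CPUs      int
        DiskPath  string
        Network   string
    }

    func main() {
        tmpl := template.Must(template.New("domain").Parse(domainTmpl))
        cfg := domainConfig{
            Name:      "auto-793540",
            MemoryMiB: 3072,
            CPUs:      2,
            DiskPath:  "/path/to/auto-793540.rawdisk", // placeholder path
            Network:   "mk-auto-793540",
        }
        // Write the rendered XML to stdout; a driver would hand it to libvirt.
        if err := tmpl.Execute(os.Stdout, cfg); err != nil {
            panic(err)
        }
    }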
	I0920 18:52:06.566945  287659 main.go:141] libmachine: (auto-793540) DBG | domain auto-793540 has defined MAC address 52:54:00:37:c4:b8 in network default
	I0920 18:52:06.567699  287659 main.go:141] libmachine: (auto-793540) Ensuring networks are active...
	I0920 18:52:06.567723  287659 main.go:141] libmachine: (auto-793540) DBG | domain auto-793540 has defined MAC address 52:54:00:81:d8:da in network mk-auto-793540
	I0920 18:52:06.568621  287659 main.go:141] libmachine: (auto-793540) Ensuring network default is active
	I0920 18:52:06.569115  287659 main.go:141] libmachine: (auto-793540) Ensuring network mk-auto-793540 is active
	I0920 18:52:06.570040  287659 main.go:141] libmachine: (auto-793540) Getting domain xml...
	I0920 18:52:06.571170  287659 main.go:141] libmachine: (auto-793540) Creating domain...
	I0920 18:52:07.936304  287659 main.go:141] libmachine: (auto-793540) Waiting to get IP...
	I0920 18:52:07.937231  287659 main.go:141] libmachine: (auto-793540) DBG | domain auto-793540 has defined MAC address 52:54:00:81:d8:da in network mk-auto-793540
	I0920 18:52:07.937749  287659 main.go:141] libmachine: (auto-793540) DBG | unable to find current IP address of domain auto-793540 in network mk-auto-793540
	I0920 18:52:07.937775  287659 main.go:141] libmachine: (auto-793540) DBG | I0920 18:52:07.937723  288050 retry.go:31] will retry after 243.685799ms: waiting for machine to come up
	I0920 18:52:08.183472  287659 main.go:141] libmachine: (auto-793540) DBG | domain auto-793540 has defined MAC address 52:54:00:81:d8:da in network mk-auto-793540
	I0920 18:52:08.184135  287659 main.go:141] libmachine: (auto-793540) DBG | unable to find current IP address of domain auto-793540 in network mk-auto-793540
	I0920 18:52:08.184170  287659 main.go:141] libmachine: (auto-793540) DBG | I0920 18:52:08.184076  288050 retry.go:31] will retry after 251.201714ms: waiting for machine to come up
	I0920 18:52:08.436940  287659 main.go:141] libmachine: (auto-793540) DBG | domain auto-793540 has defined MAC address 52:54:00:81:d8:da in network mk-auto-793540
	I0920 18:52:08.437470  287659 main.go:141] libmachine: (auto-793540) DBG | unable to find current IP address of domain auto-793540 in network mk-auto-793540
	I0920 18:52:08.437555  287659 main.go:141] libmachine: (auto-793540) DBG | I0920 18:52:08.437429  288050 retry.go:31] will retry after 349.009421ms: waiting for machine to come up
	I0920 18:52:08.789220  287659 main.go:141] libmachine: (auto-793540) DBG | domain auto-793540 has defined MAC address 52:54:00:81:d8:da in network mk-auto-793540
	I0920 18:52:08.790091  287659 main.go:141] libmachine: (auto-793540) DBG | unable to find current IP address of domain auto-793540 in network mk-auto-793540
	I0920 18:52:08.790120  287659 main.go:141] libmachine: (auto-793540) DBG | I0920 18:52:08.790018  288050 retry.go:31] will retry after 467.702682ms: waiting for machine to come up
	I0920 18:52:09.260041  287659 main.go:141] libmachine: (auto-793540) DBG | domain auto-793540 has defined MAC address 52:54:00:81:d8:da in network mk-auto-793540
	I0920 18:52:09.260593  287659 main.go:141] libmachine: (auto-793540) DBG | unable to find current IP address of domain auto-793540 in network mk-auto-793540
	I0920 18:52:09.260621  287659 main.go:141] libmachine: (auto-793540) DBG | I0920 18:52:09.260543  288050 retry.go:31] will retry after 558.217396ms: waiting for machine to come up
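(The repeated "will retry after …ms: waiting for machine to come up" lines come from a retry helper that waits a growing delay between attempts. A minimal sketch of that pattern follows; the jittered, linearly growing delay is an approximation of the behaviour seen in the log, not the exact retry.go implementation.)

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn until it succeeds or maxAttempts is reached, sleeping a
    // growing, jittered delay between attempts.
    func retry(maxAttempts int, base time.Duration, fn func() error) error {
        var err error
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := base*time.Duration(attempt) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        attempts := 0
        err := retry(5, 200*time.Millisecond, func() error {
            attempts++
            if attempts < 3 {
                return errors.New("waiting for machine to come up")
            }
            return nil
        })
        fmt.Println("done, err =", err)
    }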
	I0920 18:52:06.927059  287271 docker.go:233] disabling docker service ...
	I0920 18:52:06.927152  287271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:52:07.028121  287271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:52:07.144033  287271 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:52:07.495221  287271 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:52:07.828277  287271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:52:07.862214  287271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:52:08.009738  287271 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:52:08.009826  287271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:08.071686  287271 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:52:08.071768  287271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:08.101099  287271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:08.217852  287271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:08.241665  287271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:52:08.261799  287271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:08.287053  287271 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:08.357006  287271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:08.419345  287271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:52:08.455614  287271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:52:08.473774  287271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:52:08.821321  287271 ssh_runner.go:195] Run: sudo systemctl restart crio
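(The sed invocations above rewrite pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. The sketch below applies the same rewrite to a local file in Go instead of sed over SSH; the setCrioOption helper is hypothetical, but the key names, values, and path come from the log.)

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setCrioOption replaces the value of a top-level key such as pause_image
    // or cgroup_manager in a CRI-O drop-in config, mirroring the sed edits above.
    func setCrioOption(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf" // requires root on a CRI-O host
        if err := setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10"); err != nil {
            fmt.Println("pause_image:", err)
        }
        if err := setCrioOption(conf, "cgroup_manager", "cgroupfs"); err != nil {
            fmt.Println("cgroup_manager:", err)
        }
    }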
	I0920 18:52:09.528630  287271 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:52:09.528723  287271 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:52:09.546712  287271 start.go:563] Will wait 60s for crictl version
	I0920 18:52:09.546838  287271 ssh_runner.go:195] Run: which crictl
	I0920 18:52:09.588713  287271 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:52:09.747699  287271 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:52:09.747831  287271 ssh_runner.go:195] Run: crio --version
	I0920 18:52:10.054021  287271 ssh_runner.go:195] Run: crio --version
	I0920 18:52:10.217456  287271 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:52:10.219100  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) Calling .GetIP
	I0920 18:52:10.222737  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:52:10.223245  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:14:32", ip: ""} in network mk-kubernetes-upgrade-149276: {Iface:virbr2 ExpiryTime:2024-09-20 19:50:51 +0000 UTC Type:0 Mac:52:54:00:e7:14:32 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:kubernetes-upgrade-149276 Clientid:01:52:54:00:e7:14:32}
	I0920 18:52:10.223355  287271 main.go:141] libmachine: (kubernetes-upgrade-149276) DBG | domain kubernetes-upgrade-149276 has defined IP address 192.168.50.65 and MAC address 52:54:00:e7:14:32 in network mk-kubernetes-upgrade-149276
	I0920 18:52:10.223573  287271 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 18:52:10.243494  287271 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-149276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:kubernetes-upgrade-149276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:52:10.243667  287271 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:52:10.243742  287271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:52:10.374394  287271 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:52:10.374432  287271 crio.go:433] Images already preloaded, skipping extraction
	I0920 18:52:10.374499  287271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:52:10.472272  287271 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:52:10.472304  287271 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:52:10.472314  287271 kubeadm.go:934] updating node { 192.168.50.65 8443 v1.31.1 crio true true} ...
	I0920 18:52:10.472462  287271 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-149276 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-149276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
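(The [Unit]/[Service] block above is the kubelet systemd override that gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines later. A short sketch of assembling it from per-node values; the function and exact flag set are an illustrative assumption based on the logged ExecStart line.)

    package main

    import (
        "fmt"
        "strings"
    )

    // kubeletDropIn assembles a systemd override like the one shown above.
    func kubeletDropIn(binDir, nodeName, nodeIP string) string {
        flags := []string{
            "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
            "--config=/var/lib/kubelet/config.yaml",
            "--hostname-override=" + nodeName,
            "--kubeconfig=/etc/kubernetes/kubelet.conf",
            "--node-ip=" + nodeIP,
        }
        return fmt.Sprintf("[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart=%s/kubelet %s\n\n[Install]\n",
            binDir, strings.Join(flags, " "))
    }

    func main() {
        fmt.Print(kubeletDropIn("/var/lib/minikube/binaries/v1.31.1",
            "kubernetes-upgrade-149276", "192.168.50.65"))
    }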
	I0920 18:52:10.472558  287271 ssh_runner.go:195] Run: crio config
	I0920 18:52:10.649732  287271 cni.go:84] Creating CNI manager for ""
	I0920 18:52:10.649765  287271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:52:10.649779  287271 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:52:10.649811  287271 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.65 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-149276 NodeName:kubernetes-upgrade-149276 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:52:10.650099  287271 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-149276"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:52:10.650194  287271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:52:10.665120  287271 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:52:10.665198  287271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:52:10.677812  287271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0920 18:52:10.698413  287271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:52:10.721406  287271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
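(The kubeadm.yaml written above still uses the kubeadm.k8s.io/v1beta3 API, which kubeadm flags as deprecated earlier in this log and suggests migrating with 'kubeadm config migrate'. A hedged sketch of invoking that suggested migration from Go; it assumes kubeadm is on PATH, and the output path is a placeholder.)

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // kubeadm itself recommends this migration for v1beta3 configs.
        cmd := exec.Command("kubeadm", "config", "migrate",
            "--old-config", "/var/tmp/minikube/kubeadm.yaml",
            "--new-config", "/var/tmp/minikube/kubeadm.migrated.yaml") // placeholder output path
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("migrate failed:", err)
        }
    }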
	I0920 18:52:10.740478  287271 ssh_runner.go:195] Run: grep 192.168.50.65	control-plane.minikube.internal$ /etc/hosts
	I0920 18:52:10.744947  287271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:52:10.938401  287271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:52:10.960060  287271 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276 for IP: 192.168.50.65
	I0920 18:52:10.960090  287271 certs.go:194] generating shared ca certs ...
	I0920 18:52:10.960115  287271 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:10.960334  287271 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 18:52:10.960398  287271 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 18:52:10.960414  287271 certs.go:256] generating profile certs ...
	I0920 18:52:10.960542  287271 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/client.key
	I0920 18:52:10.960608  287271 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/apiserver.key.e2a1fb56
	I0920 18:52:10.960664  287271 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/proxy-client.key
	I0920 18:52:10.960837  287271 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 18:52:10.960894  287271 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 18:52:10.960909  287271 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:52:10.960954  287271 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:52:10.960999  287271 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:52:10.961031  287271 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 18:52:10.961091  287271 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:52:10.962103  287271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:52:10.991540  287271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:52:11.022135  287271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:52:11.049857  287271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:52:11.078677  287271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0920 18:52:11.105309  287271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:52:11.132720  287271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:52:11.164061  287271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kubernetes-upgrade-149276/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:52:11.195692  287271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 18:52:11.222985  287271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:52:11.255623  287271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 18:52:11.289596  287271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:52:11.311137  287271 ssh_runner.go:195] Run: openssl version
	I0920 18:52:11.317986  287271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 18:52:11.334015  287271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 18:52:11.339947  287271 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 18:52:11.340025  287271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 18:52:11.347892  287271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:52:11.358756  287271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:52:11.371104  287271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:52:11.377094  287271 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:52:11.377171  287271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:52:11.385078  287271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:52:11.398466  287271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 18:52:11.409810  287271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 18:52:11.415965  287271 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 18:52:11.416054  287271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 18:52:11.423392  287271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 18:52:11.437352  287271 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:52:11.442813  287271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:52:11.450904  287271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:52:11.458943  287271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:52:11.466558  287271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:52:11.473931  287271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:52:11.479982  287271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
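(The series of `openssl x509 -checkend 86400` runs above verifies that none of the control-plane certificates expires within the next 24 hours. An equivalent check using Go's standard library, shown here only as a sketch; the certificate path is one of the files checked above.)

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }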
	I0920 18:52:11.488366  287271 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-149276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:kubernetes-upgrade-149276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:52:11.488538  287271 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:52:11.488711  287271 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:52:11.533474  287271 cri.go:89] found id: "9774f8c6c4a34c8ed5054af97615b4c7d254a4e2b22505956854e0efab37f106"
	I0920 18:52:11.533503  287271 cri.go:89] found id: "a2b9bc0237d29df65016f29ebc0ee289c769987f6228fa54c78b3e553da90d91"
	I0920 18:52:11.533508  287271 cri.go:89] found id: "b95a8c68fd881126fcdf18658eb869e5df6fde55aa96e9f894ddcd8ebe204c8d"
	I0920 18:52:11.533523  287271 cri.go:89] found id: "5f11736f7f8348327e03f5c865663df69230d4c1571278ceafbc66009d161823"
	I0920 18:52:11.533526  287271 cri.go:89] found id: "a7db79b60f6a78701e8ffb3495fb217f2989e7ccc496c6f64795d0adc946f695"
	I0920 18:52:11.533529  287271 cri.go:89] found id: "66371130e3acaaa547bc6bbb74b35e96d16367982af9f3ed385027f4a104646b"
	I0920 18:52:11.533532  287271 cri.go:89] found id: "83946e189f105c295e37d6d7c10cff22c43eecdef53891906cd64d48cfdfac57"
	I0920 18:52:11.533534  287271 cri.go:89] found id: "008d37b55f20b481e151d9fbba914d3799be3a4e1b6ced0e5a8273657ef3043f"
	I0920 18:52:11.533536  287271 cri.go:89] found id: ""
	I0920 18:52:11.533586  287271 ssh_runner.go:195] Run: sudo runc list -f json
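(StartCluster begins by listing existing kube-system containers with the crictl command shown above and collecting the printed IDs. A sketch of issuing the same query from Go; it assumes crictl and sudo are available on the host, as they are in this environment.)

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // kubeSystemContainerIDs runs the same crictl query as the log above and
    // returns the non-empty container IDs it prints, one per line.
    func kubeSystemContainerIDs() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if id := strings.TrimSpace(line); id != "" {
                ids = append(ids, id)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := kubeSystemContainerIDs()
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, id := range ids {
            fmt.Println("found id:", id)
        }
    }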
	
	
	==> CRI-O <==
	Sep 20 18:52:20 kubernetes-upgrade-149276 crio[3033]: time="2024-09-20 18:52:20.871731168Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858340871701438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6869a53a-9afc-4752-94db-e5e30b5b8070 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:52:20 kubernetes-upgrade-149276 crio[3033]: time="2024-09-20 18:52:20.872343655Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=339686cb-021f-49ae-a2d3-1b6e9f1d3755 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:52:20 kubernetes-upgrade-149276 crio[3033]: time="2024-09-20 18:52:20.872410359Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=339686cb-021f-49ae-a2d3-1b6e9f1d3755 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:52:20 kubernetes-upgrade-149276 crio[3033]: time="2024-09-20 18:52:20.872837439Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:91e9a87733ce40d148533b552e7032e12a0f2fd086dcf9c47f7637153d672b81,PodSandboxId:803a79f3603921bcf981e3e919b3e8208048ac735dbd333811928a7249f114c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858337818129229,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9t6kg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e55ce5-064a-46da-bd0a-a815daafece1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88296a6d73fe450942a14b9cdb27415c3d492104340115131e963be07b857750,PodSandboxId:4569f708a45de1d21beda7e69a32213385be9f879cdabf1628774ca0a304fcf5,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858337817425984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w4tjn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fbc3a78-e250-4c74-8ce6-03f88b879f20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd94d052d833b7b5702cf83734ccf8a400ef993acf874c57cadaa96667f9176f,PodSandboxId:74747e02e99fca7a390206d847ad9c832987a0d161edc00887f7abffa16b702f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858337753176371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qp77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7af81633-6418-
486e-9117-334b8e2daf06,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eabc5ce14388f0ba5f7832fef8ef33fea50001ecfb1826e265bf1c1d83b5a875,PodSandboxId:af77ca25571f1b766fa42425aaed6713afeda0ad4c2506b9387ea4dcabb921db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
6858337765975812,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 395fec89-b40a-450d-81e2-e2207369bcfe,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f24f2d37954cf1361ae0c7f9c0140f3dc7dd59606fda3a89047dfbce65ab5109,PodSandboxId:b7639afe657c3fb1c33cc3c22cd07230dca94b2baf5fe2d9d301894b28e38366,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858333968069808,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-149276,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 394359626a0c7b90671c2c6e137bea21,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8624c5847b04d0b7dba02ab0f54ff37d6b2f82db3cb83c7c7ca48be85dfd7d93,PodSandboxId:80c41d394e01a6c2080afb03995eb4fefc053765bfc233d80cc57cc41795de92,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858334011958
140,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-149276,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb4a226fc2e77b04431ab5f733bb4914,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1973c80ef6b5814ffdfdf6d232cbb8634b45859365fc478b649caf87364a18b,PodSandboxId:c6d231f5a8022dec014d6487cd14396a426c3dbeb8ff2825446bafc468c28c4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858333971
553690,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-149276,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77244cda9055c4ee0eddf7610ae2f0d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b79cada94bd6d255829bbfc2ad1a461588afa5b03d521dc5f28aec9a3472741,PodSandboxId:6cbd05c9fa1bb2422a1e4a6253f791f3bda57eb24e8ad77c7080ed94792184d7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858333937131245,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-149276,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b0f079393af0e3366ce5670347f21f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9774f8c6c4a34c8ed5054af97615b4c7d254a4e2b22505956854e0efab37f106,PodSandboxId:af77ca25571f1b766fa42425aaed6713afeda0ad4c2506b9387ea4dcabb921db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726858330189519947,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 395fec89-b40a-450d-81e2-e2207369bcfe,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b9bc0237d29df65016f29ebc0ee289c769987f6228fa54c78b3e553da90d91,PodSandboxId:020b35741a76ee594782e1eec542bf74ea1b275f296b0f5de1023270799dc387,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858328612043085,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qp77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7af81633-6418-486e-9117-334b8e2daf06,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95a8c68fd881126fcdf18658eb869e5df6fde55aa96e9f894ddcd8ebe204c8d,PodSandboxId:8579d195d92a2f8dff965f0d56cbb6042ba86dfa7e4efffae8005633aad98b28,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858328198570325,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w4tjn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fbc3a78-e250-4c74-8ce6-03f88b879f20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f11736f7f8348327e03f5c865663df69230d4c1571278ceafbc66009d161823,PodSandboxId:84a3e9243c85ff9050175475c2241a0c465429d52fdc3baa84b7d3e890a
ef1bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858327546260416,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-149276,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 394359626a0c7b90671c2c6e137bea21,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66371130e3acaaa547bc6bbb74b35e96d16367982af9f3ed385027f4a104646b,PodSandboxId:464019d7d786d520fd0941db5efc381e7c34c6af7b267416de562b8c10161ed6,
Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726858327333406160,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-149276,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b0f079393af0e3366ce5670347f21f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7db79b60f6a78701e8ffb3495fb217f2989e7ccc496c6f64795d0adc946f695,PodSandboxId:86edefa9c50d60d8345e710947a82a0a75dea4501ff92362e13297d947c1072e,Metada
ta:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858327336810977,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-149276,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb4a226fc2e77b04431ab5f733bb4914,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83946e189f105c295e37d6d7c10cff22c43eecdef53891906cd64d48cfdfac57,PodSandboxId:9fbf8804292e894969c4e8b9db93bac246d114d647e1b8d2f9
caf37a76287840,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726858327256892032,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-149276,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77244cda9055c4ee0eddf7610ae2f0d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008d37b55f20b481e151d9fbba914d3799be3a4e1b6ced0e5a8273657ef3043f,PodSandboxId:907c344c5303abcb5dfea767b61cfa4c65df5426ba038dd65b87c6d05289e3ed,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858327144942162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9t6kg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e55ce5-064a-46da-bd0a-a815daafece1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=339686cb-021f-49ae-a2d3-1b6e9f1d3755 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:52:20 kubernetes-upgrade-149276 crio[3033]: time="2024-09-20 18:52:20.924077656Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9097c235-b6e8-4a8b-8bce-6cf688eeb801 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:52:20 kubernetes-upgrade-149276 crio[3033]: time="2024-09-20 18:52:20.924155503Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9097c235-b6e8-4a8b-8bce-6cf688eeb801 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:52:20 kubernetes-upgrade-149276 crio[3033]: time="2024-09-20 18:52:20.925354389Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ebe519c5-c261-4091-b306-85a736dc656d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:52:20 kubernetes-upgrade-149276 crio[3033]: time="2024-09-20 18:52:20.925813367Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858340925781484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ebe519c5-c261-4091-b306-85a736dc656d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:52:20 kubernetes-upgrade-149276 crio[3033]: time="2024-09-20 18:52:20.926574942Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=986088c5-0f02-484a-8375-a4b970e39ce1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:52:20 kubernetes-upgrade-149276 crio[3033]: time="2024-09-20 18:52:20.926683740Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=986088c5-0f02-484a-8375-a4b970e39ce1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:52:20 kubernetes-upgrade-149276 crio[3033]: time="2024-09-20 18:52:20.927019719Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:91e9a87733ce40d148533b552e7032e12a0f2fd086dcf9c47f7637153d672b81,PodSandboxId:803a79f3603921bcf981e3e919b3e8208048ac735dbd333811928a7249f114c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858337818129229,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9t6kg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e55ce5-064a-46da-bd0a-a815daafece1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88296a6d73fe450942a14b9cdb27415c3d492104340115131e963be07b857750,PodSandboxId:4569f708a45de1d21beda7e69a32213385be9f879cdabf1628774ca0a304fcf5,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858337817425984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w4tjn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fbc3a78-e250-4c74-8ce6-03f88b879f20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd94d052d833b7b5702cf83734ccf8a400ef993acf874c57cadaa96667f9176f,PodSandboxId:74747e02e99fca7a390206d847ad9c832987a0d161edc00887f7abffa16b702f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858337753176371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qp77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7af81633-6418-
486e-9117-334b8e2daf06,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eabc5ce14388f0ba5f7832fef8ef33fea50001ecfb1826e265bf1c1d83b5a875,PodSandboxId:af77ca25571f1b766fa42425aaed6713afeda0ad4c2506b9387ea4dcabb921db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
6858337765975812,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 395fec89-b40a-450d-81e2-e2207369bcfe,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f24f2d37954cf1361ae0c7f9c0140f3dc7dd59606fda3a89047dfbce65ab5109,PodSandboxId:b7639afe657c3fb1c33cc3c22cd07230dca94b2baf5fe2d9d301894b28e38366,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858333968069808,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-149276,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 394359626a0c7b90671c2c6e137bea21,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8624c5847b04d0b7dba02ab0f54ff37d6b2f82db3cb83c7c7ca48be85dfd7d93,PodSandboxId:80c41d394e01a6c2080afb03995eb4fefc053765bfc233d80cc57cc41795de92,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858334011958
140,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-149276,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb4a226fc2e77b04431ab5f733bb4914,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1973c80ef6b5814ffdfdf6d232cbb8634b45859365fc478b649caf87364a18b,PodSandboxId:c6d231f5a8022dec014d6487cd14396a426c3dbeb8ff2825446bafc468c28c4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858333971
553690,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-149276,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77244cda9055c4ee0eddf7610ae2f0d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b79cada94bd6d255829bbfc2ad1a461588afa5b03d521dc5f28aec9a3472741,PodSandboxId:6cbd05c9fa1bb2422a1e4a6253f791f3bda57eb24e8ad77c7080ed94792184d7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858333937131245,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-149276,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b0f079393af0e3366ce5670347f21f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9774f8c6c4a34c8ed5054af97615b4c7d254a4e2b22505956854e0efab37f106,PodSandboxId:af77ca25571f1b766fa42425aaed6713afeda0ad4c2506b9387ea4dcabb921db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726858330189519947,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 395fec89-b40a-450d-81e2-e2207369bcfe,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b9bc0237d29df65016f29ebc0ee289c769987f6228fa54c78b3e553da90d91,PodSandboxId:020b35741a76ee594782e1eec542bf74ea1b275f296b0f5de1023270799dc387,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858328612043085,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qp77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7af81633-6418-486e-9117-334b8e2daf06,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95a8c68fd881126fcdf18658eb869e5df6fde55aa96e9f894ddcd8ebe204c8d,PodSandboxId:8579d195d92a2f8dff965f0d56cbb6042ba86dfa7e4efffae8005633aad98b28,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858328198570325,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w4tjn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fbc3a78-e250-4c74-8ce6-03f88b879f20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f11736f7f8348327e03f5c865663df69230d4c1571278ceafbc66009d161823,PodSandboxId:84a3e9243c85ff9050175475c2241a0c465429d52fdc3baa84b7d3e890a
ef1bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858327546260416,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-149276,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 394359626a0c7b90671c2c6e137bea21,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66371130e3acaaa547bc6bbb74b35e96d16367982af9f3ed385027f4a104646b,PodSandboxId:464019d7d786d520fd0941db5efc381e7c34c6af7b267416de562b8c10161ed6,
Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726858327333406160,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-149276,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b0f079393af0e3366ce5670347f21f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7db79b60f6a78701e8ffb3495fb217f2989e7ccc496c6f64795d0adc946f695,PodSandboxId:86edefa9c50d60d8345e710947a82a0a75dea4501ff92362e13297d947c1072e,Metada
ta:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858327336810977,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-149276,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb4a226fc2e77b04431ab5f733bb4914,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83946e189f105c295e37d6d7c10cff22c43eecdef53891906cd64d48cfdfac57,PodSandboxId:9fbf8804292e894969c4e8b9db93bac246d114d647e1b8d2f9
caf37a76287840,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726858327256892032,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-149276,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77244cda9055c4ee0eddf7610ae2f0d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008d37b55f20b481e151d9fbba914d3799be3a4e1b6ced0e5a8273657ef3043f,PodSandboxId:907c344c5303abcb5dfea767b61cfa4c65df5426ba038dd65b87c6d05289e3ed,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858327144942162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9t6kg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e55ce5-064a-46da-bd0a-a815daafece1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=986088c5-0f02-484a-8375-a4b970e39ce1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	91e9a87733ce4       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   3 seconds ago       Running             kube-proxy                2                   803a79f360392       kube-proxy-9t6kg
	88296a6d73fe4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   4569f708a45de       coredns-7c65d6cfc9-w4tjn
	eabc5ce14388f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       3                   af77ca25571f1       storage-provisioner
	bd94d052d833b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   74747e02e99fc       coredns-7c65d6cfc9-qp77p
	8624c5847b04d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   7 seconds ago       Running             kube-controller-manager   2                   80c41d394e01a       kube-controller-manager-kubernetes-upgrade-149276
	e1973c80ef6b5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      2                   c6d231f5a8022       etcd-kubernetes-upgrade-149276
	f24f2d37954cf       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   7 seconds ago       Running             kube-scheduler            2                   b7639afe657c3       kube-scheduler-kubernetes-upgrade-149276
	2b79cada94bd6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   7 seconds ago       Running             kube-apiserver            2                   6cbd05c9fa1bb       kube-apiserver-kubernetes-upgrade-149276
	9774f8c6c4a34       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 seconds ago      Exited              storage-provisioner       2                   af77ca25571f1       storage-provisioner
	a2b9bc0237d29       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   12 seconds ago      Exited              coredns                   1                   020b35741a76e       coredns-7c65d6cfc9-qp77p
	b95a8c68fd881       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   12 seconds ago      Exited              coredns                   1                   8579d195d92a2       coredns-7c65d6cfc9-w4tjn
	5f11736f7f834       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   13 seconds ago      Exited              kube-scheduler            1                   84a3e9243c85f       kube-scheduler-kubernetes-upgrade-149276
	a7db79b60f6a7       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   13 seconds ago      Exited              kube-controller-manager   1                   86edefa9c50d6       kube-controller-manager-kubernetes-upgrade-149276
	66371130e3aca       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   13 seconds ago      Exited              kube-apiserver            1                   464019d7d786d       kube-apiserver-kubernetes-upgrade-149276
	83946e189f105       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   13 seconds ago      Exited              etcd                      1                   9fbf8804292e8       etcd-kubernetes-upgrade-149276
	008d37b55f20b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   14 seconds ago      Exited              kube-proxy                1                   907c344c5303a       kube-proxy-9t6kg
	
	
	==> coredns [88296a6d73fe450942a14b9cdb27415c3d492104340115131e963be07b857750] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [a2b9bc0237d29df65016f29ebc0ee289c769987f6228fa54c78b3e553da90d91] <==
	
	
	==> coredns [b95a8c68fd881126fcdf18658eb869e5df6fde55aa96e9f894ddcd8ebe204c8d] <==
	
	
	==> coredns [bd94d052d833b7b5702cf83734ccf8a400ef993acf874c57cadaa96667f9176f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-149276
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-149276
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:51:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-149276
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:52:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:52:17 +0000   Fri, 20 Sep 2024 18:51:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:52:17 +0000   Fri, 20 Sep 2024 18:51:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:52:17 +0000   Fri, 20 Sep 2024 18:51:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:52:17 +0000   Fri, 20 Sep 2024 18:51:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.65
	  Hostname:    kubernetes-upgrade-149276
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8c6e4a2a71e24d70b5b5f85cd120fd46
	  System UUID:                8c6e4a2a-71e2-4d70-b5b5-f85cd120fd46
	  Boot ID:                    607c9401-ae92-4d01-8ac6-79bb70007ccb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-qp77p                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     61s
	  kube-system                 coredns-7c65d6cfc9-w4tjn                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     60s
	  kube-system                 etcd-kubernetes-upgrade-149276                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         66s
	  kube-system                 kube-apiserver-kubernetes-upgrade-149276             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-149276    200m (10%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-9t6kg                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-scheduler-kubernetes-upgrade-149276             100m (5%)     0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 59s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 73s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node kubernetes-upgrade-149276 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node kubernetes-upgrade-149276 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     73s (x7 over 73s)  kubelet          Node kubernetes-upgrade-149276 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           61s                node-controller  Node kubernetes-upgrade-149276 event: Registered Node kubernetes-upgrade-149276 in Controller
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-149276 event: Registered Node kubernetes-upgrade-149276 in Controller
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.822978] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.066149] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075791] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[Sep20 18:51] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.148203] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.310064] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +4.690068] systemd-fstab-generator[717]: Ignoring "noauto" option for root device
	[  +0.082842] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.389295] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +7.739945] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.077099] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.356733] kauditd_printk_skb: 18 callbacks suppressed
	[Sep20 18:52] systemd-fstab-generator[2235]: Ignoring "noauto" option for root device
	[  +0.102002] kauditd_printk_skb: 88 callbacks suppressed
	[  +0.112303] systemd-fstab-generator[2266]: Ignoring "noauto" option for root device
	[  +0.527728] systemd-fstab-generator[2543]: Ignoring "noauto" option for root device
	[  +0.329764] systemd-fstab-generator[2681]: Ignoring "noauto" option for root device
	[  +0.960744] systemd-fstab-generator[2970]: Ignoring "noauto" option for root device
	[  +2.214046] systemd-fstab-generator[3868]: Ignoring "noauto" option for root device
	[  +2.365124] systemd-fstab-generator[3991]: Ignoring "noauto" option for root device
	[  +0.088503] kauditd_printk_skb: 303 callbacks suppressed
	[  +5.182671] kauditd_printk_skb: 61 callbacks suppressed
	[  +0.508099] systemd-fstab-generator[4523]: Ignoring "noauto" option for root device
	
	
	==> etcd [83946e189f105c295e37d6d7c10cff22c43eecdef53891906cd64d48cfdfac57] <==
	{"level":"info","ts":"2024-09-20T18:52:07.896552Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-20T18:52:07.947293Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"3b47abb9750ed10a","local-member-id":"176dee5a0ced0823","commit-index":445}
	{"level":"info","ts":"2024-09-20T18:52:07.947764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"176dee5a0ced0823 switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-20T18:52:07.947886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"176dee5a0ced0823 became follower at term 2"}
	{"level":"info","ts":"2024-09-20T18:52:07.947981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 176dee5a0ced0823 [peers: [], term: 2, commit: 445, applied: 0, lastindex: 445, lastterm: 2]"}
	{"level":"warn","ts":"2024-09-20T18:52:07.954125Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-09-20T18:52:08.018464Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":430}
	{"level":"info","ts":"2024-09-20T18:52:08.087948Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-09-20T18:52:08.099401Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"176dee5a0ced0823","timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:52:08.106061Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"176dee5a0ced0823"}
	{"level":"info","ts":"2024-09-20T18:52:08.106132Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"176dee5a0ced0823","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-20T18:52:08.108670Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-20T18:52:08.108808Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-20T18:52:08.108844Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-20T18:52:08.108852Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-20T18:52:08.109068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"176dee5a0ced0823 switched to configuration voters=(1688267505865132067)"}
	{"level":"info","ts":"2024-09-20T18:52:08.109114Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3b47abb9750ed10a","local-member-id":"176dee5a0ced0823","added-peer-id":"176dee5a0ced0823","added-peer-peer-urls":["https://192.168.50.65:2380"]}
	{"level":"info","ts":"2024-09-20T18:52:08.109201Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3b47abb9750ed10a","local-member-id":"176dee5a0ced0823","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:52:08.109227Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:52:08.111742Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:52:08.113520Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T18:52:08.124195Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.65:2380"}
	{"level":"info","ts":"2024-09-20T18:52:08.124351Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.65:2380"}
	{"level":"info","ts":"2024-09-20T18:52:08.124913Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"176dee5a0ced0823","initial-advertise-peer-urls":["https://192.168.50.65:2380"],"listen-peer-urls":["https://192.168.50.65:2380"],"advertise-client-urls":["https://192.168.50.65:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.65:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T18:52:08.125085Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [e1973c80ef6b5814ffdfdf6d232cbb8634b45859365fc478b649caf87364a18b] <==
	{"level":"info","ts":"2024-09-20T18:52:14.446063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"176dee5a0ced0823 switched to configuration voters=(1688267505865132067)"}
	{"level":"info","ts":"2024-09-20T18:52:14.446193Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3b47abb9750ed10a","local-member-id":"176dee5a0ced0823","added-peer-id":"176dee5a0ced0823","added-peer-peer-urls":["https://192.168.50.65:2380"]}
	{"level":"info","ts":"2024-09-20T18:52:14.446317Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3b47abb9750ed10a","local-member-id":"176dee5a0ced0823","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:52:14.446367Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:52:14.452015Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T18:52:14.460767Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"176dee5a0ced0823","initial-advertise-peer-urls":["https://192.168.50.65:2380"],"listen-peer-urls":["https://192.168.50.65:2380"],"advertise-client-urls":["https://192.168.50.65:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.65:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T18:52:14.460876Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T18:52:14.460958Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.65:2380"}
	{"level":"info","ts":"2024-09-20T18:52:14.463882Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.65:2380"}
	{"level":"info","ts":"2024-09-20T18:52:15.469030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"176dee5a0ced0823 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-20T18:52:15.469229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"176dee5a0ced0823 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-20T18:52:15.469277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"176dee5a0ced0823 received MsgPreVoteResp from 176dee5a0ced0823 at term 2"}
	{"level":"info","ts":"2024-09-20T18:52:15.469321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"176dee5a0ced0823 became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T18:52:15.469346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"176dee5a0ced0823 received MsgVoteResp from 176dee5a0ced0823 at term 3"}
	{"level":"info","ts":"2024-09-20T18:52:15.469373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"176dee5a0ced0823 became leader at term 3"}
	{"level":"info","ts":"2024-09-20T18:52:15.469399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 176dee5a0ced0823 elected leader 176dee5a0ced0823 at term 3"}
	{"level":"info","ts":"2024-09-20T18:52:15.476006Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:52:15.477033Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:52:15.477861Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.65:2379"}
	{"level":"info","ts":"2024-09-20T18:52:15.475966Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"176dee5a0ced0823","local-member-attributes":"{Name:kubernetes-upgrade-149276 ClientURLs:[https://192.168.50.65:2379]}","request-path":"/0/members/176dee5a0ced0823/attributes","cluster-id":"3b47abb9750ed10a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:52:15.478358Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:52:15.479022Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:52:15.479759Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T18:52:15.484665Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:52:15.484737Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:52:21 up 1 min,  0 users,  load average: 1.56, 0.44, 0.15
	Linux kubernetes-upgrade-149276 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2b79cada94bd6d255829bbfc2ad1a461588afa5b03d521dc5f28aec9a3472741] <==
	I0920 18:52:17.065107       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 18:52:17.070249       1 aggregator.go:171] initial CRD sync complete...
	I0920 18:52:17.070276       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 18:52:17.070282       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 18:52:17.070290       1 cache.go:39] Caches are synced for autoregister controller
	I0920 18:52:17.103302       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 18:52:17.108954       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 18:52:17.111067       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 18:52:17.111115       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 18:52:17.111136       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 18:52:17.111184       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 18:52:17.118807       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0920 18:52:17.126192       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 18:52:17.126242       1 policy_source.go:224] refreshing policies
	I0920 18:52:17.131692       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 18:52:17.131735       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 18:52:17.134380       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 18:52:17.941793       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 18:52:18.676074       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 18:52:18.711855       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 18:52:18.772696       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 18:52:18.844460       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 18:52:18.854430       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0920 18:52:19.850756       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 18:52:20.745061       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [66371130e3acaaa547bc6bbb74b35e96d16367982af9f3ed385027f4a104646b] <==
	I0920 18:52:08.350204       1 options.go:228] external host was not specified, using 192.168.50.65
	I0920 18:52:08.361029       1 server.go:142] Version: v1.31.1
	I0920 18:52:08.361084       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [8624c5847b04d0b7dba02ab0f54ff37d6b2f82db3cb83c7c7ca48be85dfd7d93] <==
	I0920 18:52:20.296003       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0920 18:52:20.296049       1 shared_informer.go:320] Caches are synced for PV protection
	I0920 18:52:20.296093       1 shared_informer.go:320] Caches are synced for service account
	I0920 18:52:20.292400       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0920 18:52:20.299208       1 shared_informer.go:320] Caches are synced for HPA
	I0920 18:52:20.300420       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0920 18:52:20.292694       1 shared_informer.go:320] Caches are synced for stateful set
	I0920 18:52:20.292713       1 shared_informer.go:320] Caches are synced for ephemeral
	I0920 18:52:20.304764       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0920 18:52:20.343271       1 shared_informer.go:320] Caches are synced for PVC protection
	I0920 18:52:20.345755       1 shared_informer.go:320] Caches are synced for deployment
	I0920 18:52:20.346521       1 shared_informer.go:320] Caches are synced for job
	I0920 18:52:20.390599       1 shared_informer.go:320] Caches are synced for persistent volume
	I0920 18:52:20.397577       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0920 18:52:20.409692       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="113.410003ms"
	I0920 18:52:20.409833       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="78.662µs"
	I0920 18:52:20.441923       1 shared_informer.go:320] Caches are synced for endpoint
	I0920 18:52:20.505049       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:52:20.530915       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:52:20.542379       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0920 18:52:20.542466       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-149276"
	I0920 18:52:20.542552       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0920 18:52:20.997186       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 18:52:20.997212       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0920 18:52:21.002904       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [a7db79b60f6a78701e8ffb3495fb217f2989e7ccc496c6f64795d0adc946f695] <==
	
	
	==> kube-proxy [008d37b55f20b481e151d9fbba914d3799be3a4e1b6ced0e5a8273657ef3043f] <==
	I0920 18:52:08.415595       1 server_linux.go:66] "Using iptables proxy"
	E0920 18:52:08.603447       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	
	
	==> kube-proxy [91e9a87733ce40d148533b552e7032e12a0f2fd086dcf9c47f7637153d672b81] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:52:18.211477       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:52:18.229419       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.65"]
	E0920 18:52:18.229506       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:52:18.280025       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:52:18.280080       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:52:18.280106       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:52:18.285847       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:52:18.286137       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:52:18.286170       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:52:18.288052       1 config.go:199] "Starting service config controller"
	I0920 18:52:18.288115       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:52:18.288150       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:52:18.288165       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:52:18.288745       1 config.go:328] "Starting node config controller"
	I0920 18:52:18.288776       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:52:18.388875       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:52:18.389090       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:52:18.389202       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5f11736f7f8348327e03f5c865663df69230d4c1571278ceafbc66009d161823] <==
	
	
	==> kube-scheduler [f24f2d37954cf1361ae0c7f9c0140f3dc7dd59606fda3a89047dfbce65ab5109] <==
	I0920 18:52:15.404206       1 serving.go:386] Generated self-signed cert in-memory
	W0920 18:52:16.973007       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 18:52:16.974525       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 18:52:16.974594       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:52:16.974701       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:52:17.063408       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 18:52:17.063444       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:52:17.067357       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 18:52:17.071158       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:52:17.072005       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 18:52:17.073739       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 18:52:17.172388       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:52:13 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:13.661171    3998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/77244cda9055c4ee0eddf7610ae2f0d8-etcd-data\") pod \"etcd-kubernetes-upgrade-149276\" (UID: \"77244cda9055c4ee0eddf7610ae2f0d8\") " pod="kube-system/etcd-kubernetes-upgrade-149276"
	Sep 20 18:52:13 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:13.661193    3998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85b0f079393af0e3366ce5670347f21f-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-149276\" (UID: \"85b0f079393af0e3366ce5670347f21f\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-149276"
	Sep 20 18:52:13 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:13.687880    3998 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-149276"
	Sep 20 18:52:13 kubernetes-upgrade-149276 kubelet[3998]: E0920 18:52:13.688675    3998 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.65:8443: connect: connection refused" node="kubernetes-upgrade-149276"
	Sep 20 18:52:13 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:13.891345    3998 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-149276"
	Sep 20 18:52:13 kubernetes-upgrade-149276 kubelet[3998]: E0920 18:52:13.892441    3998 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.65:8443: connect: connection refused" node="kubernetes-upgrade-149276"
	Sep 20 18:52:13 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:13.915139    3998 scope.go:117] "RemoveContainer" containerID="5f11736f7f8348327e03f5c865663df69230d4c1571278ceafbc66009d161823"
	Sep 20 18:52:13 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:13.915505    3998 scope.go:117] "RemoveContainer" containerID="83946e189f105c295e37d6d7c10cff22c43eecdef53891906cd64d48cfdfac57"
	Sep 20 18:52:13 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:13.915872    3998 scope.go:117] "RemoveContainer" containerID="a7db79b60f6a78701e8ffb3495fb217f2989e7ccc496c6f64795d0adc946f695"
	Sep 20 18:52:13 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:13.916090    3998 scope.go:117] "RemoveContainer" containerID="66371130e3acaaa547bc6bbb74b35e96d16367982af9f3ed385027f4a104646b"
	Sep 20 18:52:14 kubernetes-upgrade-149276 kubelet[3998]: E0920 18:52:14.060668    3998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-149276?timeout=10s\": dial tcp 192.168.50.65:8443: connect: connection refused" interval="800ms"
	Sep 20 18:52:14 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:14.294069    3998 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-149276"
	Sep 20 18:52:17 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:17.177212    3998 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-149276"
	Sep 20 18:52:17 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:17.177884    3998 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-149276"
	Sep 20 18:52:17 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:17.177997    3998 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 20 18:52:17 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:17.179266    3998 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 20 18:52:17 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:17.423554    3998 apiserver.go:52] "Watching apiserver"
	Sep 20 18:52:17 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:17.443114    3998 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 20 18:52:17 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:17.525064    3998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66e55ce5-064a-46da-bd0a-a815daafece1-xtables-lock\") pod \"kube-proxy-9t6kg\" (UID: \"66e55ce5-064a-46da-bd0a-a815daafece1\") " pod="kube-system/kube-proxy-9t6kg"
	Sep 20 18:52:17 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:17.525230    3998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/395fec89-b40a-450d-81e2-e2207369bcfe-tmp\") pod \"storage-provisioner\" (UID: \"395fec89-b40a-450d-81e2-e2207369bcfe\") " pod="kube-system/storage-provisioner"
	Sep 20 18:52:17 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:17.525337    3998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66e55ce5-064a-46da-bd0a-a815daafece1-lib-modules\") pod \"kube-proxy-9t6kg\" (UID: \"66e55ce5-064a-46da-bd0a-a815daafece1\") " pod="kube-system/kube-proxy-9t6kg"
	Sep 20 18:52:17 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:17.728752    3998 scope.go:117] "RemoveContainer" containerID="a2b9bc0237d29df65016f29ebc0ee289c769987f6228fa54c78b3e553da90d91"
	Sep 20 18:52:17 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:17.729054    3998 scope.go:117] "RemoveContainer" containerID="9774f8c6c4a34c8ed5054af97615b4c7d254a4e2b22505956854e0efab37f106"
	Sep 20 18:52:17 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:17.729156    3998 scope.go:117] "RemoveContainer" containerID="008d37b55f20b481e151d9fbba914d3799be3a4e1b6ced0e5a8273657ef3043f"
	Sep 20 18:52:17 kubernetes-upgrade-149276 kubelet[3998]: I0920 18:52:17.729349    3998 scope.go:117] "RemoveContainer" containerID="b95a8c68fd881126fcdf18658eb869e5df6fde55aa96e9f894ddcd8ebe204c8d"
	
	
	==> storage-provisioner [9774f8c6c4a34c8ed5054af97615b4c7d254a4e2b22505956854e0efab37f106] <==
	I0920 18:52:10.381335       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0920 18:52:10.384888       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [eabc5ce14388f0ba5f7832fef8ef33fea50001ecfb1826e265bf1c1d83b5a875] <==
	I0920 18:52:17.984888       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:52:18.007386       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:52:18.007461       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

-- /stdout --
** stderr ** 
	E0920 18:52:20.440890  288266 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19679-237658/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
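Note on the stderr above: the "bufio.Scanner: token too long" error is Go's bufio.Scanner hitting its default 64 KiB per-line limit while reading lastStart.txt, whose log lines (like the ListContainers entries earlier in this output) can be far longer than that. Below is a minimal standalone sketch of reading such a file with an enlarged scanner buffer; this is not minikube's actual code, and the file path is a placeholder:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // placeholder path, not the real report location
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default max token size is 64 KiB; allow single lines up to 10 MiB instead.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		// With the default buffer, over-long lines surface here as bufio.ErrTooLong.
		log.Fatal(err)
	}
}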
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-149276 -n kubernetes-upgrade-149276
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-149276 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-149276" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-149276
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-149276: (1.12174727s)
--- FAIL: TestKubernetesUpgrade (404.36s)
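For reference, the node facts the post-mortem above reports (Kubelet Version: v1.31.1, Ready condition True on kubernetes-upgrade-149276) can also be checked directly against the API server with client-go. The following is a minimal, hypothetical sketch, not part of the minikube test suite; the kubeconfig path and node name are copied from this report and would need adjusting elsewhere:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used by this CI run; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19679-237658/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "kubernetes-upgrade-149276", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("kubelet version:", node.Status.NodeInfo.KubeletVersion)
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Println("Ready condition:", c.Status)
		}
	}
}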

x
+
TestPause/serial/SecondStartNoReconfiguration (92.49s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-554447 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-554447 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m26.813313856s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-554447] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-554447" primary control-plane node in "pause-554447" cluster
	* Updating the running kvm2 "pause-554447" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-554447" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0920 18:52:32.184860  288558 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:52:32.184997  288558 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:52:32.185008  288558 out.go:358] Setting ErrFile to fd 2...
	I0920 18:52:32.185014  288558 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:52:32.185220  288558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 18:52:32.185938  288558 out.go:352] Setting JSON to false
	I0920 18:52:32.186990  288558 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9295,"bootTime":1726849057,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:52:32.187106  288558 start.go:139] virtualization: kvm guest
	I0920 18:52:32.188835  288558 out.go:177] * [pause-554447] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:52:32.190003  288558 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 18:52:32.189998  288558 notify.go:220] Checking for updates...
	I0920 18:52:32.193083  288558 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:52:32.194815  288558 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 18:52:32.196734  288558 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:52:32.198530  288558 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:52:32.200007  288558 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:52:32.201926  288558 config.go:182] Loaded profile config "pause-554447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:52:32.202546  288558 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:52:32.202616  288558 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:52:32.218959  288558 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42859
	I0920 18:52:32.219500  288558 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:52:32.220085  288558 main.go:141] libmachine: Using API Version  1
	I0920 18:52:32.220114  288558 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:52:32.220611  288558 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:52:32.220818  288558 main.go:141] libmachine: (pause-554447) Calling .DriverName
	I0920 18:52:32.221128  288558 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:52:32.221580  288558 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:52:32.221646  288558 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:52:32.237980  288558 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41351
	I0920 18:52:32.238503  288558 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:52:32.239045  288558 main.go:141] libmachine: Using API Version  1
	I0920 18:52:32.239076  288558 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:52:32.239527  288558 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:52:32.239771  288558 main.go:141] libmachine: (pause-554447) Calling .DriverName
	I0920 18:52:32.278440  288558 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:52:32.280165  288558 start.go:297] selected driver: kvm2
	I0920 18:52:32.280192  288558 start.go:901] validating driver "kvm2" against &{Name:pause-554447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.1 ClusterName:pause-554447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:52:32.280350  288558 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:52:32.280746  288558 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:52:32.280861  288558 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:52:32.299764  288558 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:52:32.300489  288558 cni.go:84] Creating CNI manager for ""
	I0920 18:52:32.300536  288558 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:52:32.300603  288558 start.go:340] cluster config:
	{Name:pause-554447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-554447 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-alia
ses:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:52:32.300744  288558 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:52:32.302614  288558 out.go:177] * Starting "pause-554447" primary control-plane node in "pause-554447" cluster
	I0920 18:52:32.303869  288558 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:52:32.303918  288558 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:52:32.303932  288558 cache.go:56] Caching tarball of preloaded images
	I0920 18:52:32.304049  288558 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:52:32.304058  288558 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:52:32.304173  288558 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/config.json ...
	I0920 18:52:32.304353  288558 start.go:360] acquireMachinesLock for pause-554447: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:53:21.822720  288558 start.go:364] duration metric: took 49.51832539s to acquireMachinesLock for "pause-554447"
	I0920 18:53:21.822800  288558 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:53:21.822832  288558 fix.go:54] fixHost starting: 
	I0920 18:53:21.823294  288558 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:53:21.823338  288558 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:53:21.842294  288558 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37999
	I0920 18:53:21.842953  288558 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:53:21.843554  288558 main.go:141] libmachine: Using API Version  1
	I0920 18:53:21.843585  288558 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:53:21.843963  288558 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:53:21.844166  288558 main.go:141] libmachine: (pause-554447) Calling .DriverName
	I0920 18:53:21.844311  288558 main.go:141] libmachine: (pause-554447) Calling .GetState
	I0920 18:53:21.846375  288558 fix.go:112] recreateIfNeeded on pause-554447: state=Running err=<nil>
	W0920 18:53:21.846403  288558 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:53:21.849095  288558 out.go:177] * Updating the running kvm2 "pause-554447" VM ...
	I0920 18:53:21.850543  288558 machine.go:93] provisionDockerMachine start ...
	I0920 18:53:21.850578  288558 main.go:141] libmachine: (pause-554447) Calling .DriverName
	I0920 18:53:21.850845  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHHostname
	I0920 18:53:21.854605  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:21.855070  288558 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:53:21.855099  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:21.855284  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHPort
	I0920 18:53:21.855480  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:53:21.855649  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:53:21.855813  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHUsername
	I0920 18:53:21.855992  288558 main.go:141] libmachine: Using SSH client type: native
	I0920 18:53:21.856240  288558 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0920 18:53:21.856271  288558 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:53:21.975306  288558 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-554447
	
	I0920 18:53:21.975339  288558 main.go:141] libmachine: (pause-554447) Calling .GetMachineName
	I0920 18:53:21.975733  288558 buildroot.go:166] provisioning hostname "pause-554447"
	I0920 18:53:21.975771  288558 main.go:141] libmachine: (pause-554447) Calling .GetMachineName
	I0920 18:53:21.975992  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHHostname
	I0920 18:53:21.979215  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:21.979557  288558 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:53:21.979593  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:21.979759  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHPort
	I0920 18:53:21.979977  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:53:21.980157  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:53:21.980294  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHUsername
	I0920 18:53:21.980450  288558 main.go:141] libmachine: Using SSH client type: native
	I0920 18:53:21.980692  288558 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0920 18:53:21.980710  288558 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-554447 && echo "pause-554447" | sudo tee /etc/hostname
	I0920 18:53:22.108927  288558 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-554447
	
	I0920 18:53:22.108997  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHHostname
	I0920 18:53:22.113108  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:22.113612  288558 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:53:22.113674  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:22.113981  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHPort
	I0920 18:53:22.114219  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:53:22.114415  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:53:22.114609  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHUsername
	I0920 18:53:22.114850  288558 main.go:141] libmachine: Using SSH client type: native
	I0920 18:53:22.115102  288558 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0920 18:53:22.115128  288558 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-554447' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-554447/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-554447' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:53:22.231818  288558 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:53:22.231858  288558 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 18:53:22.231883  288558 buildroot.go:174] setting up certificates
	I0920 18:53:22.231892  288558 provision.go:84] configureAuth start
	I0920 18:53:22.231901  288558 main.go:141] libmachine: (pause-554447) Calling .GetMachineName
	I0920 18:53:22.232133  288558 main.go:141] libmachine: (pause-554447) Calling .GetIP
	I0920 18:53:22.235583  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:22.236109  288558 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:53:22.236137  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:22.236382  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHHostname
	I0920 18:53:22.239541  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:22.240043  288558 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:53:22.240073  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:22.240290  288558 provision.go:143] copyHostCerts
	I0920 18:53:22.240363  288558 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 18:53:22.240387  288558 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 18:53:22.240455  288558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 18:53:22.240567  288558 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 18:53:22.240578  288558 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 18:53:22.240610  288558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 18:53:22.240688  288558 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 18:53:22.240698  288558 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 18:53:22.240725  288558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 18:53:22.240812  288558 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.pause-554447 san=[127.0.0.1 192.168.61.38 localhost minikube pause-554447]
	I0920 18:53:22.391625  288558 provision.go:177] copyRemoteCerts
	I0920 18:53:22.391699  288558 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:53:22.391726  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHHostname
	I0920 18:53:22.394946  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:22.395344  288558 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:53:22.395375  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:22.395590  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHPort
	I0920 18:53:22.395810  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:53:22.395998  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHUsername
	I0920 18:53:22.396133  288558 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/pause-554447/id_rsa Username:docker}
	I0920 18:53:22.485501  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:53:22.512914  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 18:53:22.541608  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:53:22.570832  288558 provision.go:87] duration metric: took 338.925607ms to configureAuth
	I0920 18:53:22.570861  288558 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:53:22.571111  288558 config.go:182] Loaded profile config "pause-554447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:53:22.571211  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHHostname
	I0920 18:53:22.574861  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:22.575234  288558 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:53:22.575274  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:22.575617  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHPort
	I0920 18:53:22.575841  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:53:22.576018  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:53:22.576167  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHUsername
	I0920 18:53:22.576358  288558 main.go:141] libmachine: Using SSH client type: native
	I0920 18:53:22.576599  288558 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0920 18:53:22.576620  288558 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:53:28.288100  288558 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:53:28.288135  288558 machine.go:96] duration metric: took 6.437571369s to provisionDockerMachine
	I0920 18:53:28.288149  288558 start.go:293] postStartSetup for "pause-554447" (driver="kvm2")
	I0920 18:53:28.288162  288558 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:53:28.288186  288558 main.go:141] libmachine: (pause-554447) Calling .DriverName
	I0920 18:53:28.288674  288558 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:53:28.288713  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHHostname
	I0920 18:53:28.291795  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:28.292363  288558 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:53:28.292395  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:28.292717  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHPort
	I0920 18:53:28.292931  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:53:28.293116  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHUsername
	I0920 18:53:28.293288  288558 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/pause-554447/id_rsa Username:docker}
	I0920 18:53:28.381618  288558 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:53:28.385929  288558 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:53:28.385961  288558 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 18:53:28.386044  288558 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 18:53:28.386144  288558 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 18:53:28.386264  288558 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:53:28.398332  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:53:28.430113  288558 start.go:296] duration metric: took 141.948594ms for postStartSetup
	I0920 18:53:28.430154  288558 fix.go:56] duration metric: took 6.607322255s for fixHost
	I0920 18:53:28.430181  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHHostname
	I0920 18:53:28.433516  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:28.433871  288558 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:53:28.433954  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:28.434049  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHPort
	I0920 18:53:28.434219  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:53:28.434345  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:53:28.434441  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHUsername
	I0920 18:53:28.434586  288558 main.go:141] libmachine: Using SSH client type: native
	I0920 18:53:28.434810  288558 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0920 18:53:28.434825  288558 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:53:28.555495  288558 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726858408.550999240
	
	I0920 18:53:28.555531  288558 fix.go:216] guest clock: 1726858408.550999240
	I0920 18:53:28.555542  288558 fix.go:229] Guest: 2024-09-20 18:53:28.55099924 +0000 UTC Remote: 2024-09-20 18:53:28.430157981 +0000 UTC m=+56.293843652 (delta=120.841259ms)
	I0920 18:53:28.555574  288558 fix.go:200] guest clock delta is within tolerance: 120.841259ms
	I0920 18:53:28.555583  288558 start.go:83] releasing machines lock for "pause-554447", held for 6.7328068s
	I0920 18:53:28.555626  288558 main.go:141] libmachine: (pause-554447) Calling .DriverName
	I0920 18:53:28.555950  288558 main.go:141] libmachine: (pause-554447) Calling .GetIP
	I0920 18:53:28.559134  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:28.559681  288558 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:53:28.559719  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:28.559889  288558 main.go:141] libmachine: (pause-554447) Calling .DriverName
	I0920 18:53:28.566422  288558 main.go:141] libmachine: (pause-554447) Calling .DriverName
	I0920 18:53:28.566718  288558 main.go:141] libmachine: (pause-554447) Calling .DriverName
	I0920 18:53:28.566801  288558 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:53:28.566876  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHHostname
	I0920 18:53:28.567162  288558 ssh_runner.go:195] Run: cat /version.json
	I0920 18:53:28.567193  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHHostname
	I0920 18:53:28.570830  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:28.571322  288558 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:53:28.571350  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:28.571662  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHPort
	I0920 18:53:28.571921  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:53:28.572141  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHUsername
	I0920 18:53:28.572309  288558 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/pause-554447/id_rsa Username:docker}
	I0920 18:53:28.572690  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:28.572991  288558 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:53:28.573022  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:28.573242  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHPort
	I0920 18:53:28.573483  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHKeyPath
	I0920 18:53:28.573792  288558 main.go:141] libmachine: (pause-554447) Calling .GetSSHUsername
	I0920 18:53:28.573975  288558 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/pause-554447/id_rsa Username:docker}
	I0920 18:53:28.720065  288558 ssh_runner.go:195] Run: systemctl --version
	I0920 18:53:28.734773  288558 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:53:28.955157  288558 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:53:28.974559  288558 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:53:28.974644  288558 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:53:28.997605  288558 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 18:53:28.997635  288558 start.go:495] detecting cgroup driver to use...
	I0920 18:53:28.997714  288558 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:53:29.065370  288558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:53:29.103116  288558 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:53:29.103191  288558 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:53:29.145809  288558 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:53:29.206643  288558 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:53:29.462770  288558 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:53:29.703735  288558 docker.go:233] disabling docker service ...
	I0920 18:53:29.703821  288558 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:53:29.752163  288558 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:53:29.776650  288558 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:53:29.977798  288558 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:53:30.226991  288558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:53:30.266338  288558 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:53:30.309009  288558 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:53:30.309077  288558 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:53:30.325175  288558 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:53:30.325248  288558 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:53:30.342925  288558 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:53:30.371630  288558 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:53:30.384462  288558 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:53:30.398850  288558 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:53:30.413738  288558 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:53:30.441135  288558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:53:30.458450  288558 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:53:30.523429  288558 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:53:30.537748  288558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:53:30.730184  288558 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:53:31.642555  288558 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:53:31.642651  288558 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:53:31.648985  288558 start.go:563] Will wait 60s for crictl version
	I0920 18:53:31.649064  288558 ssh_runner.go:195] Run: which crictl
	I0920 18:53:31.654101  288558 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:53:31.702711  288558 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:53:31.702803  288558 ssh_runner.go:195] Run: crio --version
	I0920 18:53:31.742837  288558 ssh_runner.go:195] Run: crio --version
	I0920 18:53:31.838007  288558 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:53:31.839756  288558 main.go:141] libmachine: (pause-554447) Calling .GetIP
	I0920 18:53:31.844696  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:31.845381  288558 main.go:141] libmachine: (pause-554447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:66:a0", ip: ""} in network mk-pause-554447: {Iface:virbr3 ExpiryTime:2024-09-20 19:51:46 +0000 UTC Type:0 Mac:52:54:00:2c:66:a0 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:pause-554447 Clientid:01:52:54:00:2c:66:a0}
	I0920 18:53:31.845403  288558 main.go:141] libmachine: (pause-554447) DBG | domain pause-554447 has defined IP address 192.168.61.38 and MAC address 52:54:00:2c:66:a0 in network mk-pause-554447
	I0920 18:53:31.845738  288558 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 18:53:31.875324  288558 kubeadm.go:883] updating cluster {Name:pause-554447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-554447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:53:31.875494  288558 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:53:31.875561  288558 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:53:32.170900  288558 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:53:32.170926  288558 crio.go:433] Images already preloaded, skipping extraction
	I0920 18:53:32.170990  288558 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:53:32.395762  288558 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:53:32.395791  288558 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:53:32.395802  288558 kubeadm.go:934] updating node { 192.168.61.38 8443 v1.31.1 crio true true} ...
	I0920 18:53:32.395937  288558 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-554447 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-554447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:53:32.396027  288558 ssh_runner.go:195] Run: crio config
	I0920 18:53:32.481642  288558 cni.go:84] Creating CNI manager for ""
	I0920 18:53:32.481678  288558 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:53:32.481691  288558 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:53:32.481721  288558 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.38 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-554447 NodeName:pause-554447 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:53:32.481937  288558 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-554447"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:53:32.482019  288558 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:53:32.496689  288558 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:53:32.496777  288558 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:53:32.514404  288558 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0920 18:53:32.545712  288558 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:53:32.581826  288558 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 18:53:32.603179  288558 ssh_runner.go:195] Run: grep 192.168.61.38	control-plane.minikube.internal$ /etc/hosts
	I0920 18:53:32.616278  288558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:53:32.796541  288558 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:53:32.822096  288558 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447 for IP: 192.168.61.38
	I0920 18:53:32.822125  288558 certs.go:194] generating shared ca certs ...
	I0920 18:53:32.822147  288558 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:53:32.822350  288558 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 18:53:32.822405  288558 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 18:53:32.822420  288558 certs.go:256] generating profile certs ...
	I0920 18:53:32.822525  288558 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/client.key
	I0920 18:53:32.822634  288558 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/apiserver.key.4bf6a80a
	I0920 18:53:32.822698  288558 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/proxy-client.key
	I0920 18:53:32.822846  288558 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 18:53:32.822889  288558 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 18:53:32.822904  288558 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:53:32.822940  288558 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:53:32.822973  288558 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:53:32.823004  288558 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 18:53:32.823064  288558 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:53:32.823741  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:53:32.852035  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:53:32.885772  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:53:32.911843  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:53:32.951986  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 18:53:33.019869  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:53:33.052337  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:53:33.083916  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:53:33.122241  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 18:53:33.157454  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:53:33.190163  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 18:53:33.222729  288558 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:53:33.242742  288558 ssh_runner.go:195] Run: openssl version
	I0920 18:53:33.250955  288558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 18:53:33.264274  288558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 18:53:33.269125  288558 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 18:53:33.269194  288558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 18:53:33.276616  288558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:53:33.289078  288558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:53:33.303593  288558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:53:33.308792  288558 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:53:33.308880  288558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:53:33.315432  288558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:53:33.327713  288558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 18:53:33.340929  288558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 18:53:33.346925  288558 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 18:53:33.347002  288558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 18:53:33.353643  288558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 18:53:33.365392  288558 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:53:33.372217  288558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:53:33.380377  288558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:53:33.387564  288558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:53:33.395743  288558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:53:33.402595  288558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:53:33.408810  288558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 18:53:33.415252  288558 kubeadm.go:392] StartCluster: {Name:pause-554447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-554447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:53:33.415439  288558 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:53:33.415516  288558 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:53:33.458794  288558 cri.go:89] found id: "59b28fc2b896a53558f83673c614029cb8a1db88928dc151b9287ff813a8dd82"
	I0920 18:53:33.458820  288558 cri.go:89] found id: "37c5ce73b96fd4db8562f7833f3c22837f6cb9687c761ff7aa9867050097dab2"
	I0920 18:53:33.458825  288558 cri.go:89] found id: "7ade82905607621ba39bf27ef1b0ba512f76fb2e3300b624dc1ce7c2168e6cd9"
	I0920 18:53:33.458828  288558 cri.go:89] found id: "e9eb1fb088d99924633f831fde758f31a45f76fe95fb90b925d61a70a6646849"
	I0920 18:53:33.458831  288558 cri.go:89] found id: "306ad987a3acfb2818b72735ddbc1e5c74c39b2964697da8e0a180411d79f0ad"
	I0920 18:53:33.458834  288558 cri.go:89] found id: "14f94689d54d9a49a49151a0add6061df363c8f566e3ac15eb124c728e25a26e"
	I0920 18:53:33.458837  288558 cri.go:89] found id: ""
	I0920 18:53:33.458886  288558 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-554447 -n pause-554447
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-554447 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-554447 logs -n 25: (2.309281016s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-793540 sudo cat                              | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo cat                              | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo systemctl                        | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | status docker --all --full                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo systemctl                        | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | cat docker --no-pager                                |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo cat                              | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo docker                           | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo systemctl                        | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | status cri-docker --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo systemctl                        | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | cat cri-docker --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo cat                              | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo cat                              | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo                                  | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo systemctl                        | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | status containerd --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo systemctl                        | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | cat containerd --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo cat                              | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo cat                              | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo containerd                       | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | config dump                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo systemctl                        | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | status crio --all --full                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo systemctl                        | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | cat crio --no-pager                                  |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo find                             | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo crio                             | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p auto-793540                                       | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	| start   | -p custom-flannel-793540                             | custom-flannel-793540 | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	| ssh     | -p kindnet-793540 pgrep -a                           | kindnet-793540        | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | kubelet                                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-793540 sudo cat                           | kindnet-793540        | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | /etc/nsswitch.conf                                   |                       |         |         |                     |                     |
	| ssh     | -p kindnet-793540 sudo cat                           | kindnet-793540        | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | /etc/hosts                                           |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
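
	The table above is minikube's audit of recently run commands; each row corresponds to a `minikube <command> <args>` invocation against the named profile. As an illustrative sketch only (the auto-793540 profile is deleted later in the table, so this assumes a still-existing profile and a minikube binary on PATH), one of the CRI-O inspection rows could be replayed like this:

	import subprocess

	# Replays the "ssh -p auto-793540 sudo crio config" row from the audit table above.
	# Profile name and PATH availability are assumptions, not part of the original run.
	result = subprocess.run(
	    ["minikube", "ssh", "-p", "auto-793540", "sudo", "crio", "config"],
	    capture_output=True, text=True,
	)
	print(result.stdout[:500])  # first part of the generated CRI-O configuration
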
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:53:34
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:53:34.761884  290520 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:53:34.762256  290520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:53:34.762304  290520 out.go:358] Setting ErrFile to fd 2...
	I0920 18:53:34.762320  290520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:53:34.762928  290520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 18:53:34.763682  290520 out.go:352] Setting JSON to false
	I0920 18:53:34.765542  290520 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9358,"bootTime":1726849057,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:53:34.765697  290520 start.go:139] virtualization: kvm guest
	I0920 18:53:34.768455  290520 out.go:177] * [custom-flannel-793540] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:53:34.770064  290520 notify.go:220] Checking for updates...
	I0920 18:53:34.770073  290520 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 18:53:34.771922  290520 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:53:34.773347  290520 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 18:53:34.774890  290520 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:53:34.776611  290520 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:53:34.778173  290520 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:53:34.780331  290520 config.go:182] Loaded profile config "calico-793540": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:53:34.780485  290520 config.go:182] Loaded profile config "kindnet-793540": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:53:34.780662  290520 config.go:182] Loaded profile config "pause-554447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:53:34.780766  290520 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:53:34.825420  290520 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 18:53:34.827080  290520 start.go:297] selected driver: kvm2
	I0920 18:53:34.827113  290520 start.go:901] validating driver "kvm2" against <nil>
	I0920 18:53:34.827150  290520 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:53:34.828272  290520 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:53:34.828387  290520 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:53:34.846448  290520 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:53:34.846507  290520 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:53:34.846829  290520 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:53:34.846862  290520 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0920 18:53:34.846880  290520 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0920 18:53:34.846941  290520 start.go:340] cluster config:
	{Name:custom-flannel-793540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-793540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:53:34.847090  290520 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:53:34.849008  290520 out.go:177] * Starting "custom-flannel-793540" primary control-plane node in "custom-flannel-793540" cluster
	I0920 18:53:34.850494  290520 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:53:34.850545  290520 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:53:34.850556  290520 cache.go:56] Caching tarball of preloaded images
	I0920 18:53:34.850691  290520 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:53:34.850703  290520 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:53:34.850889  290520 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/config.json ...
	I0920 18:53:34.850919  290520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/config.json: {Name:mk19dc05f622b1071ca1610306d3a792c86b9b15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:53:34.851120  290520 start.go:360] acquireMachinesLock for custom-flannel-793540: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:53:34.851182  290520 start.go:364] duration metric: took 37.747µs to acquireMachinesLock for "custom-flannel-793540"
	I0920 18:53:34.851205  290520 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-793540 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-793540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:53:34.851298  290520 start.go:125] createHost starting for "" (driver="kvm2")
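	The start log above saves the cluster definition to profiles/custom-flannel-793540/config.json before acquiring the machines lock and provisioning the VM. A minimal sketch (assuming a default MINIKUBE_HOME and the field names visible in the dumped cluster config) that reads back the fields the log keys on:

	import json, os

	# Path mirrors the profile.go "Saving config to ..." line above; adjust for your MINIKUBE_HOME.
	path = os.path.expanduser("~/.minikube/profiles/custom-flannel-793540/config.json")
	with open(path) as f:
	    cfg = json.load(f)

	# Driver, ContainerRuntime and KubernetesVersion correspond to the struct dump in the log.
	print(cfg["Driver"],
	      cfg["KubernetesConfig"]["ContainerRuntime"],
	      cfg["KubernetesConfig"]["KubernetesVersion"])
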
	I0920 18:53:32.395762  288558 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:53:32.395791  288558 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:53:32.395802  288558 kubeadm.go:934] updating node { 192.168.61.38 8443 v1.31.1 crio true true} ...
	I0920 18:53:32.395937  288558 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-554447 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-554447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:53:32.396027  288558 ssh_runner.go:195] Run: crio config
	I0920 18:53:32.481642  288558 cni.go:84] Creating CNI manager for ""
	I0920 18:53:32.481678  288558 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:53:32.481691  288558 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:53:32.481721  288558 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.38 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-554447 NodeName:pause-554447 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:53:32.481937  288558 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-554447"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
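
	The generated kubeadm config above is one multi-document YAML bundling InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration; the log below shows minikube copying it to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch (assuming PyYAML and access to that file on the node) that splits the bundle and lists each document's kind:

	import yaml  # PyYAML

	# Path taken from the scp step further down in this log; treat it as an assumption on your node.
	with open("/var/tmp/minikube/kubeadm.yaml.new") as f:
	    docs = [d for d in yaml.safe_load_all(f) if d]

	for doc in docs:
	    # Expect the kubeadm Init/Cluster configurations plus the kubelet and
	    # kube-proxy component configs shown above.
	    print(doc.get("apiVersion"), "/", doc.get("kind"))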
	
	I0920 18:53:32.482019  288558 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:53:32.496689  288558 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:53:32.496777  288558 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:53:32.514404  288558 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0920 18:53:32.545712  288558 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:53:32.581826  288558 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 18:53:32.603179  288558 ssh_runner.go:195] Run: grep 192.168.61.38	control-plane.minikube.internal$ /etc/hosts
	I0920 18:53:32.616278  288558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:53:32.796541  288558 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:53:32.822096  288558 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447 for IP: 192.168.61.38
	I0920 18:53:32.822125  288558 certs.go:194] generating shared ca certs ...
	I0920 18:53:32.822147  288558 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:53:32.822350  288558 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 18:53:32.822405  288558 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 18:53:32.822420  288558 certs.go:256] generating profile certs ...
	I0920 18:53:32.822525  288558 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/client.key
	I0920 18:53:32.822634  288558 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/apiserver.key.4bf6a80a
	I0920 18:53:32.822698  288558 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/proxy-client.key
	I0920 18:53:32.822846  288558 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 18:53:32.822889  288558 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 18:53:32.822904  288558 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:53:32.822940  288558 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:53:32.822973  288558 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:53:32.823004  288558 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 18:53:32.823064  288558 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:53:32.823741  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:53:32.852035  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:53:32.885772  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:53:32.911843  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:53:32.951986  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 18:53:33.019869  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:53:33.052337  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:53:33.083916  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:53:33.122241  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 18:53:33.157454  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:53:33.190163  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 18:53:33.222729  288558 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:53:33.242742  288558 ssh_runner.go:195] Run: openssl version
	I0920 18:53:33.250955  288558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 18:53:33.264274  288558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 18:53:33.269125  288558 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 18:53:33.269194  288558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 18:53:33.276616  288558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:53:33.289078  288558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:53:33.303593  288558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:53:33.308792  288558 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:53:33.308880  288558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:53:33.315432  288558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:53:33.327713  288558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 18:53:33.340929  288558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 18:53:33.346925  288558 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 18:53:33.347002  288558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 18:53:33.353643  288558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 18:53:33.365392  288558 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:53:33.372217  288558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:53:33.380377  288558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:53:33.387564  288558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:53:33.395743  288558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:53:33.402595  288558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:53:33.408810  288558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 18:53:33.415252  288558 kubeadm.go:392] StartCluster: {Name:pause-554447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:pause-554447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-securi
ty-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:53:33.415439  288558 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:53:33.415516  288558 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:53:33.458794  288558 cri.go:89] found id: "59b28fc2b896a53558f83673c614029cb8a1db88928dc151b9287ff813a8dd82"
	I0920 18:53:33.458820  288558 cri.go:89] found id: "37c5ce73b96fd4db8562f7833f3c22837f6cb9687c761ff7aa9867050097dab2"
	I0920 18:53:33.458825  288558 cri.go:89] found id: "7ade82905607621ba39bf27ef1b0ba512f76fb2e3300b624dc1ce7c2168e6cd9"
	I0920 18:53:33.458828  288558 cri.go:89] found id: "e9eb1fb088d99924633f831fde758f31a45f76fe95fb90b925d61a70a6646849"
	I0920 18:53:33.458831  288558 cri.go:89] found id: "306ad987a3acfb2818b72735ddbc1e5c74c39b2964697da8e0a180411d79f0ad"
	I0920 18:53:33.458834  288558 cri.go:89] found id: "14f94689d54d9a49a49151a0add6061df363c8f566e3ac15eb124c728e25a26e"
	I0920 18:53:33.458837  288558 cri.go:89] found id: ""
	I0920 18:53:33.458886  288558 ssh_runner.go:195] Run: sudo runc list -f json
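	The StartCluster step above enumerates kube-system containers with crictl before deciding what to restart. A minimal sketch reproducing the same listing (same crictl flags as in the ssh_runner line; assumes it is run on the node itself, e.g. via minikube ssh):

	import subprocess

	# Mirrors: crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	cmd = ["sudo", "crictl", "ps", "-a", "--quiet",
	       "--label", "io.kubernetes.pod.namespace=kube-system"]
	ids = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.split()
	print(f"found {len(ids)} kube-system containers")
	for cid in ids:
	    print(" ", cid)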
	
	
	==> CRI-O <==
	Sep 20 18:53:59 pause-554447 crio[2622]: time="2024-09-20 18:53:59.867506050Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0126c225-fb25-46ff-8cd4-c50f0b76487e name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:59 pause-554447 crio[2622]: time="2024-09-20 18:53:59.870035456Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0d38d559-c845-41ae-980b-47ce53c15a55 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:59 pause-554447 crio[2622]: time="2024-09-20 18:53:59.870841073Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858439870798770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d38d559-c845-41ae-980b-47ce53c15a55 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:59 pause-554447 crio[2622]: time="2024-09-20 18:53:59.872234746Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48fb8278-3fe7-476d-b8b1-096a74b98d53 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:59 pause-554447 crio[2622]: time="2024-09-20 18:53:59.872311155Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48fb8278-3fe7-476d-b8b1-096a74b98d53 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:59 pause-554447 crio[2622]: time="2024-09-20 18:53:59.872782358Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ad94f0cc8e704ef8fee0a9c236ca31425da5910fa0abc7de3545d6fefa33272,PodSandboxId:109a6691b4270ed833775170a13020f9ee68cab4d146e5963956e9a50eb08b01,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858421368711329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sszr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7b0632-f5a8-4419-ae15-0a6b031982e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869d49ac806ce100c88b85eed7a0506a0a3ce6ad3234984a07a3c7501a06bf70,PodSandboxId:015a34b7ffcd8219f9cfe7fd03af05aec6be9fe92e2c310a0132f4629e143311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858421354038418,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8m8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3e16e6c3-dc2a-4df5-8582-2ebf7026fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad3d3d2edb34d41873387590960fa51edb1d72c325f9b98eeefcb136cbc41c9c,PodSandboxId:cccedae6c29efbe565c80417dafb0d6f90e8519f003fe01bda2580b2c4206788,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858416547616401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 465abce066ceb512184b124d93a2c6cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5242873eb735319c57a5e041c1d99599cc13bd7890d8de864304c0874fad874e,PodSandboxId:7a53615b7e8f05ffc7e2432198c9a52e999dc63ac963983e3e487117d2e5cca3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858416544990208,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
b88851eedf4f1dda1e93f5a4298515e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff327236bd6632d6c7e181d24c28830c253981dbe576a5d554a2fc44a5d10a4e,PodSandboxId:7ae72eb79b646fc523eb201f64538162a035ec5bb5f07dda4f10b45413ca7b64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858416508000544,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f07d07086c9f4616ea
aa38e6da8cbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fbf90dec670c2cc5f1832e4c3396317b604e471086ee944a7a05144c92123b,PodSandboxId:4081a44bc4c93760754d7c9e6149c387b30fed2a59d5a0865e29c8aee792c7c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858416477475211,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec9487357c13c5c1fbc6b9db12f00483,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59b28fc2b896a53558f83673c614029cb8a1db88928dc151b9287ff813a8dd82,PodSandboxId:4aafcd213db7ab88ebcef21d6240343b4a1051fb1f7dee418919c38bad0496fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858410507600947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sszr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7b0632-f5a8-4419-ae15-0a6b031982e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9eb1fb088d99924633f831fde758f31a45f76fe95fb90b925d61a70a6646849,PodSandboxId:ac3923fd473b5c4a448fa7439fa8cd97ed595dda49c623a00865e840a68a5329,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858410021591455,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-p8m8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e16e6c3-dc2a-4df5-8582-2ebf7026fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37c5ce73b96fd4db8562f7833f3c22837f6cb9687c761ff7aa9867050097dab2,PodSandboxId:f55f5f2246af8de7f72ec8638246f43f4cdd4b2f078ca267ab9cde5d1bd96376,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858410034909234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb88851eedf4f1dda1e93f5a4298515e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ade82905607621ba39bf27ef1b0ba512f76fb2e3300b624dc1ce7c2168e6cd9,PodSandboxId:e3cb0014834430737fe22f7311f7f639fd0433c817985b900c012cea1e102153,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858410029346377,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465abce066ceb512184b124d93a2c6cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306ad987a3acfb2818b72735ddbc1e5c74c39b2964697da8e0a180411d79f0ad,PodSandboxId:95f0d6ae2b3eb2dc30f2e768200556d0e65964b804eb837dde81a2b03b57cd9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726858409580697693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-554447,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: ec9487357c13c5c1fbc6b9db12f00483,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f94689d54d9a49a49151a0add6061df363c8f566e3ac15eb124c728e25a26e,PodSandboxId:9d6cdd7159c5731676bf9d80083bff2cd9ba72d6f009c12e8e7844f07d0027fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726858409372807130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 8f07d07086c9f4616eaaa38e6da8cbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48fb8278-3fe7-476d-b8b1-096a74b98d53 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:59 pause-554447 crio[2622]: time="2024-09-20 18:53:59.929042409Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=56ac0e06-cfa1-4f6a-8ec6-c8b749713fcd name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:59 pause-554447 crio[2622]: time="2024-09-20 18:53:59.929163393Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=56ac0e06-cfa1-4f6a-8ec6-c8b749713fcd name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:59 pause-554447 crio[2622]: time="2024-09-20 18:53:59.930429147Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=88d0a9af-4dd4-4332-97eb-a365cb4ab72b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:59 pause-554447 crio[2622]: time="2024-09-20 18:53:59.931543853Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858439931502253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=88d0a9af-4dd4-4332-97eb-a365cb4ab72b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:59 pause-554447 crio[2622]: time="2024-09-20 18:53:59.933339643Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d6fa538-7af1-4aa6-b330-384645a03cde name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:59 pause-554447 crio[2622]: time="2024-09-20 18:53:59.933489156Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d6fa538-7af1-4aa6-b330-384645a03cde name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:59 pause-554447 crio[2622]: time="2024-09-20 18:53:59.933883993Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ad94f0cc8e704ef8fee0a9c236ca31425da5910fa0abc7de3545d6fefa33272,PodSandboxId:109a6691b4270ed833775170a13020f9ee68cab4d146e5963956e9a50eb08b01,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858421368711329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sszr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7b0632-f5a8-4419-ae15-0a6b031982e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869d49ac806ce100c88b85eed7a0506a0a3ce6ad3234984a07a3c7501a06bf70,PodSandboxId:015a34b7ffcd8219f9cfe7fd03af05aec6be9fe92e2c310a0132f4629e143311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858421354038418,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8m8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3e16e6c3-dc2a-4df5-8582-2ebf7026fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad3d3d2edb34d41873387590960fa51edb1d72c325f9b98eeefcb136cbc41c9c,PodSandboxId:cccedae6c29efbe565c80417dafb0d6f90e8519f003fe01bda2580b2c4206788,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858416547616401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 465abce066ceb512184b124d93a2c6cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5242873eb735319c57a5e041c1d99599cc13bd7890d8de864304c0874fad874e,PodSandboxId:7a53615b7e8f05ffc7e2432198c9a52e999dc63ac963983e3e487117d2e5cca3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858416544990208,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
b88851eedf4f1dda1e93f5a4298515e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff327236bd6632d6c7e181d24c28830c253981dbe576a5d554a2fc44a5d10a4e,PodSandboxId:7ae72eb79b646fc523eb201f64538162a035ec5bb5f07dda4f10b45413ca7b64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858416508000544,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f07d07086c9f4616ea
aa38e6da8cbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fbf90dec670c2cc5f1832e4c3396317b604e471086ee944a7a05144c92123b,PodSandboxId:4081a44bc4c93760754d7c9e6149c387b30fed2a59d5a0865e29c8aee792c7c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858416477475211,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec9487357c13c5c1fbc6b9db12f00483,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59b28fc2b896a53558f83673c614029cb8a1db88928dc151b9287ff813a8dd82,PodSandboxId:4aafcd213db7ab88ebcef21d6240343b4a1051fb1f7dee418919c38bad0496fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858410507600947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sszr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7b0632-f5a8-4419-ae15-0a6b031982e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9eb1fb088d99924633f831fde758f31a45f76fe95fb90b925d61a70a6646849,PodSandboxId:ac3923fd473b5c4a448fa7439fa8cd97ed595dda49c623a00865e840a68a5329,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858410021591455,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-p8m8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e16e6c3-dc2a-4df5-8582-2ebf7026fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37c5ce73b96fd4db8562f7833f3c22837f6cb9687c761ff7aa9867050097dab2,PodSandboxId:f55f5f2246af8de7f72ec8638246f43f4cdd4b2f078ca267ab9cde5d1bd96376,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858410034909234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb88851eedf4f1dda1e93f5a4298515e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ade82905607621ba39bf27ef1b0ba512f76fb2e3300b624dc1ce7c2168e6cd9,PodSandboxId:e3cb0014834430737fe22f7311f7f639fd0433c817985b900c012cea1e102153,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858410029346377,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465abce066ceb512184b124d93a2c6cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306ad987a3acfb2818b72735ddbc1e5c74c39b2964697da8e0a180411d79f0ad,PodSandboxId:95f0d6ae2b3eb2dc30f2e768200556d0e65964b804eb837dde81a2b03b57cd9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726858409580697693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-554447,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: ec9487357c13c5c1fbc6b9db12f00483,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f94689d54d9a49a49151a0add6061df363c8f566e3ac15eb124c728e25a26e,PodSandboxId:9d6cdd7159c5731676bf9d80083bff2cd9ba72d6f009c12e8e7844f07d0027fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726858409372807130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 8f07d07086c9f4616eaaa38e6da8cbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4d6fa538-7af1-4aa6-b330-384645a03cde name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:59 pause-554447 crio[2622]: time="2024-09-20 18:53:59.986507181Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2ff59379-40d0-45b9-8de9-c1b7a7c7341d name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:59 pause-554447 crio[2622]: time="2024-09-20 18:53:59.986671156Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2ff59379-40d0-45b9-8de9-c1b7a7c7341d name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:59 pause-554447 crio[2622]: time="2024-09-20 18:53:59.989952707Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a4d3f58e-dcb9-42c5-a515-b348f8833930 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:59 pause-554447 crio[2622]: time="2024-09-20 18:53:59.990617155Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858439990577450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a4d3f58e-dcb9-42c5-a515-b348f8833930 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:59 pause-554447 crio[2622]: time="2024-09-20 18:53:59.991549531Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=30ba4422-385f-48df-9080-ab36fc6434b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:59 pause-554447 crio[2622]: time="2024-09-20 18:53:59.991743317Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=30ba4422-385f-48df-9080-ab36fc6434b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:59 pause-554447 crio[2622]: time="2024-09-20 18:53:59.992182546Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ad94f0cc8e704ef8fee0a9c236ca31425da5910fa0abc7de3545d6fefa33272,PodSandboxId:109a6691b4270ed833775170a13020f9ee68cab4d146e5963956e9a50eb08b01,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858421368711329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sszr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7b0632-f5a8-4419-ae15-0a6b031982e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869d49ac806ce100c88b85eed7a0506a0a3ce6ad3234984a07a3c7501a06bf70,PodSandboxId:015a34b7ffcd8219f9cfe7fd03af05aec6be9fe92e2c310a0132f4629e143311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858421354038418,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8m8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3e16e6c3-dc2a-4df5-8582-2ebf7026fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad3d3d2edb34d41873387590960fa51edb1d72c325f9b98eeefcb136cbc41c9c,PodSandboxId:cccedae6c29efbe565c80417dafb0d6f90e8519f003fe01bda2580b2c4206788,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858416547616401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 465abce066ceb512184b124d93a2c6cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5242873eb735319c57a5e041c1d99599cc13bd7890d8de864304c0874fad874e,PodSandboxId:7a53615b7e8f05ffc7e2432198c9a52e999dc63ac963983e3e487117d2e5cca3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858416544990208,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
b88851eedf4f1dda1e93f5a4298515e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff327236bd6632d6c7e181d24c28830c253981dbe576a5d554a2fc44a5d10a4e,PodSandboxId:7ae72eb79b646fc523eb201f64538162a035ec5bb5f07dda4f10b45413ca7b64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858416508000544,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f07d07086c9f4616ea
aa38e6da8cbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fbf90dec670c2cc5f1832e4c3396317b604e471086ee944a7a05144c92123b,PodSandboxId:4081a44bc4c93760754d7c9e6149c387b30fed2a59d5a0865e29c8aee792c7c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858416477475211,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec9487357c13c5c1fbc6b9db12f00483,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59b28fc2b896a53558f83673c614029cb8a1db88928dc151b9287ff813a8dd82,PodSandboxId:4aafcd213db7ab88ebcef21d6240343b4a1051fb1f7dee418919c38bad0496fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858410507600947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sszr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7b0632-f5a8-4419-ae15-0a6b031982e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9eb1fb088d99924633f831fde758f31a45f76fe95fb90b925d61a70a6646849,PodSandboxId:ac3923fd473b5c4a448fa7439fa8cd97ed595dda49c623a00865e840a68a5329,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858410021591455,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-p8m8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e16e6c3-dc2a-4df5-8582-2ebf7026fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37c5ce73b96fd4db8562f7833f3c22837f6cb9687c761ff7aa9867050097dab2,PodSandboxId:f55f5f2246af8de7f72ec8638246f43f4cdd4b2f078ca267ab9cde5d1bd96376,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858410034909234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb88851eedf4f1dda1e93f5a4298515e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ade82905607621ba39bf27ef1b0ba512f76fb2e3300b624dc1ce7c2168e6cd9,PodSandboxId:e3cb0014834430737fe22f7311f7f639fd0433c817985b900c012cea1e102153,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858410029346377,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465abce066ceb512184b124d93a2c6cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306ad987a3acfb2818b72735ddbc1e5c74c39b2964697da8e0a180411d79f0ad,PodSandboxId:95f0d6ae2b3eb2dc30f2e768200556d0e65964b804eb837dde81a2b03b57cd9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726858409580697693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-554447,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: ec9487357c13c5c1fbc6b9db12f00483,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f94689d54d9a49a49151a0add6061df363c8f566e3ac15eb124c728e25a26e,PodSandboxId:9d6cdd7159c5731676bf9d80083bff2cd9ba72d6f009c12e8e7844f07d0027fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726858409372807130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 8f07d07086c9f4616eaaa38e6da8cbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=30ba4422-385f-48df-9080-ab36fc6434b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:54:00 pause-554447 crio[2622]: time="2024-09-20 18:54:00.035277479Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5bd7dda6-39a2-4426-b0d6-89b78b454333 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 18:54:00 pause-554447 crio[2622]: time="2024-09-20 18:54:00.035737538Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:109a6691b4270ed833775170a13020f9ee68cab4d146e5963956e9a50eb08b01,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-sszr2,Uid:9d7b0632-f5a8-4419-ae15-0a6b031982e1,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726858412234228893,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-sszr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7b0632-f5a8-4419-ae15-0a6b031982e1,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:52:20.349522051Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4081a44bc4c93760754d7c9e6149c387b30fed2a59d5a0865e29c8aee792c7c6,Metadata:&PodSandboxMetadata{Name:etcd-pause-554447,Uid:ec9487357c13c5c1fbc6b9db12f00483,Namespace:kube-system,Attempt:2,
},State:SANDBOX_READY,CreatedAt:1726858412086276134,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec9487357c13c5c1fbc6b9db12f00483,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.38:2379,kubernetes.io/config.hash: ec9487357c13c5c1fbc6b9db12f00483,kubernetes.io/config.seen: 2024-09-20T18:52:14.310077576Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:015a34b7ffcd8219f9cfe7fd03af05aec6be9fe92e2c310a0132f4629e143311,Metadata:&PodSandboxMetadata{Name:kube-proxy-p8m8l,Uid:3e16e6c3-dc2a-4df5-8582-2ebf7026fe87,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726858412032805392,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-p8m8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3e16e6c3-dc2a-4df5-8582-2ebf7026fe87,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:52:20.196821869Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cccedae6c29efbe565c80417dafb0d6f90e8519f003fe01bda2580b2c4206788,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-554447,Uid:465abce066ceb512184b124d93a2c6cd,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726858411928895723,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465abce066ceb512184b124d93a2c6cd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 465abce066ceb512184b124d93a2c6cd,kubernetes.io/config.seen: 2024-09-20T18:52:14.310074517Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7ae72eb79b646fc523eb201f64538162a0
35ec5bb5f07dda4f10b45413ca7b64,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-554447,Uid:8f07d07086c9f4616eaaa38e6da8cbae,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726858411917399792,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f07d07086c9f4616eaaa38e6da8cbae,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.38:8443,kubernetes.io/config.hash: 8f07d07086c9f4616eaaa38e6da8cbae,kubernetes.io/config.seen: 2024-09-20T18:52:14.310069618Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7a53615b7e8f05ffc7e2432198c9a52e999dc63ac963983e3e487117d2e5cca3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-554447,Uid:cb88851eedf4f1dda1e93f5a4298515e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726858411884922488,Label
s:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb88851eedf4f1dda1e93f5a4298515e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: cb88851eedf4f1dda1e93f5a4298515e,kubernetes.io/config.seen: 2024-09-20T18:52:14.310076032Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5bd7dda6-39a2-4426-b0d6-89b78b454333 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 18:54:00 pause-554447 crio[2622]: time="2024-09-20 18:54:00.036914557Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd29a663-4d70-49b5-8a9f-90a4f4156405 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:54:00 pause-554447 crio[2622]: time="2024-09-20 18:54:00.037329417Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd29a663-4d70-49b5-8a9f-90a4f4156405 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:54:00 pause-554447 crio[2622]: time="2024-09-20 18:54:00.037722573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ad94f0cc8e704ef8fee0a9c236ca31425da5910fa0abc7de3545d6fefa33272,PodSandboxId:109a6691b4270ed833775170a13020f9ee68cab4d146e5963956e9a50eb08b01,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858421368711329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sszr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7b0632-f5a8-4419-ae15-0a6b031982e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869d49ac806ce100c88b85eed7a0506a0a3ce6ad3234984a07a3c7501a06bf70,PodSandboxId:015a34b7ffcd8219f9cfe7fd03af05aec6be9fe92e2c310a0132f4629e143311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858421354038418,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8m8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3e16e6c3-dc2a-4df5-8582-2ebf7026fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad3d3d2edb34d41873387590960fa51edb1d72c325f9b98eeefcb136cbc41c9c,PodSandboxId:cccedae6c29efbe565c80417dafb0d6f90e8519f003fe01bda2580b2c4206788,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858416547616401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 465abce066ceb512184b124d93a2c6cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5242873eb735319c57a5e041c1d99599cc13bd7890d8de864304c0874fad874e,PodSandboxId:7a53615b7e8f05ffc7e2432198c9a52e999dc63ac963983e3e487117d2e5cca3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858416544990208,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
b88851eedf4f1dda1e93f5a4298515e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff327236bd6632d6c7e181d24c28830c253981dbe576a5d554a2fc44a5d10a4e,PodSandboxId:7ae72eb79b646fc523eb201f64538162a035ec5bb5f07dda4f10b45413ca7b64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858416508000544,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f07d07086c9f4616ea
aa38e6da8cbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fbf90dec670c2cc5f1832e4c3396317b604e471086ee944a7a05144c92123b,PodSandboxId:4081a44bc4c93760754d7c9e6149c387b30fed2a59d5a0865e29c8aee792c7c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858416477475211,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec9487357c13c5c1fbc6b9db12f00483,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cd29a663-4d70-49b5-8a9f-90a4f4156405 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7ad94f0cc8e70       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   18 seconds ago      Running             coredns                   2                   109a6691b4270       coredns-7c65d6cfc9-sszr2
	869d49ac806ce       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   18 seconds ago      Running             kube-proxy                2                   015a34b7ffcd8       kube-proxy-p8m8l
	ad3d3d2edb34d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   23 seconds ago      Running             kube-controller-manager   2                   cccedae6c29ef       kube-controller-manager-pause-554447
	5242873eb7353       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   23 seconds ago      Running             kube-scheduler            2                   7a53615b7e8f0       kube-scheduler-pause-554447
	ff327236bd663       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   23 seconds ago      Running             kube-apiserver            2                   7ae72eb79b646       kube-apiserver-pause-554447
	60fbf90dec670       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   23 seconds ago      Running             etcd                      2                   4081a44bc4c93       etcd-pause-554447
	59b28fc2b896a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   29 seconds ago      Exited              coredns                   1                   4aafcd213db7a       coredns-7c65d6cfc9-sszr2
	37c5ce73b96fd       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   30 seconds ago      Exited              kube-scheduler            1                   f55f5f2246af8       kube-scheduler-pause-554447
	7ade829056076       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   30 seconds ago      Exited              kube-controller-manager   1                   e3cb001483443       kube-controller-manager-pause-554447
	e9eb1fb088d99       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   30 seconds ago      Exited              kube-proxy                1                   ac3923fd473b5       kube-proxy-p8m8l
	306ad987a3acf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   30 seconds ago      Exited              etcd                      1                   95f0d6ae2b3eb       etcd-pause-554447
	14f94689d54d9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   30 seconds ago      Exited              kube-apiserver            1                   9d6cdd7159c57       kube-apiserver-pause-554447
	
	
	==> coredns [59b28fc2b896a53558f83673c614029cb8a1db88928dc151b9287ff813a8dd82] <==
	
	
	==> coredns [7ad94f0cc8e704ef8fee0a9c236ca31425da5910fa0abc7de3545d6fefa33272] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33710 - 17147 "HINFO IN 1689799688632273517.6893539958490244548. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014786329s
	
	
	==> describe nodes <==
	Name:               pause-554447
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-554447
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=pause-554447
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_52_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:52:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-554447
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:54:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:53:40 +0000   Fri, 20 Sep 2024 18:52:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:53:40 +0000   Fri, 20 Sep 2024 18:52:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:53:40 +0000   Fri, 20 Sep 2024 18:52:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:53:40 +0000   Fri, 20 Sep 2024 18:52:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.38
	  Hostname:    pause-554447
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a06f74c9c124fd4a324b67fd1939cc1
	  System UUID:                0a06f74c-9c12-4fd4-a324-b67fd1939cc1
	  Boot ID:                    6d363e35-343d-4666-850a-1830bb0d48d1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-sszr2                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     100s
	  kube-system                 etcd-pause-554447                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         106s
	  kube-system                 kube-apiserver-pause-554447             250m (12%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-controller-manager-pause-554447    200m (10%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-proxy-p8m8l                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-scheduler-pause-554447             100m (5%)     0 (0%)      0 (0%)           0 (0%)         106s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 99s                  kube-proxy       
	  Normal  Starting                 18s                  kube-proxy       
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  113s (x8 over 113s)  kubelet          Node pause-554447 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x8 over 113s)  kubelet          Node pause-554447 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x7 over 113s)  kubelet          Node pause-554447 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  113s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    106s                 kubelet          Node pause-554447 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  106s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  106s                 kubelet          Node pause-554447 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     106s                 kubelet          Node pause-554447 status is now: NodeHasSufficientPID
	  Normal  Starting                 106s                 kubelet          Starting kubelet.
	  Normal  NodeReady                105s                 kubelet          Node pause-554447 status is now: NodeReady
	  Normal  RegisteredNode           101s                 node-controller  Node pause-554447 event: Registered Node pause-554447 in Controller
	  Normal  Starting                 24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)    kubelet          Node pause-554447 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)    kubelet          Node pause-554447 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)    kubelet          Node pause-554447 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                  node-controller  Node pause-554447 event: Registered Node pause-554447 in Controller
	
	
	==> dmesg <==
	[ +10.460297] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.064995] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069362] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.166079] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.141996] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.317741] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[Sep20 18:52] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +4.244931] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.071210] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.496670] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[  +0.120931] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.392451] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.140013] kauditd_printk_skb: 21 callbacks suppressed
	[  +9.040785] kauditd_printk_skb: 69 callbacks suppressed
	[Sep20 18:53] systemd-fstab-generator[2216]: Ignoring "noauto" option for root device
	[  +0.266600] systemd-fstab-generator[2308]: Ignoring "noauto" option for root device
	[  +0.285593] systemd-fstab-generator[2396]: Ignoring "noauto" option for root device
	[  +0.206114] systemd-fstab-generator[2413]: Ignoring "noauto" option for root device
	[  +0.539847] systemd-fstab-generator[2566]: Ignoring "noauto" option for root device
	[  +2.075851] systemd-fstab-generator[3137]: Ignoring "noauto" option for root device
	[  +3.089879] systemd-fstab-generator[3259]: Ignoring "noauto" option for root device
	[  +0.081724] kauditd_printk_skb: 244 callbacks suppressed
	[  +5.609946] kauditd_printk_skb: 38 callbacks suppressed
	[  +9.753635] kauditd_printk_skb: 4 callbacks suppressed
	[  +4.575449] systemd-fstab-generator[3695]: Ignoring "noauto" option for root device
	
	
	==> etcd [306ad987a3acfb2818b72735ddbc1e5c74c39b2964697da8e0a180411d79f0ad] <==
	{"level":"warn","ts":"2024-09-20T18:53:30.328650Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-09-20T18:53:30.328791Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.61.38:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.61.38:2380","--initial-cluster=pause-554447=https://192.168.61.38:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.61.38:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.61.38:2380","--name=pause-554447","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-c
a-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-09-20T18:53:30.354678Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-09-20T18:53:30.354771Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-09-20T18:53:30.354796Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.61.38:2380"]}
	{"level":"info","ts":"2024-09-20T18:53:30.354909Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T18:53:30.356186Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.38:2379"]}
	{"level":"info","ts":"2024-09-20T18:53:30.356481Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-554447","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.61.38:2380"],"listen-peer-urls":["https://192.168.61.38:2380"],"advertise-client-urls":["https://192.168.61.38:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.38:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluste
r-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-09-20T18:53:30.372712Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"15.586512ms"}
	{"level":"info","ts":"2024-09-20T18:53:30.390543Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-20T18:53:30.401132Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"ac52cafbc0494bf3","local-member-id":"a85cda6b4b3fcaa2","commit-index":457}
	{"level":"info","ts":"2024-09-20T18:53:30.401318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-20T18:53:30.401432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 became follower at term 2"}
	{"level":"info","ts":"2024-09-20T18:53:30.401452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft a85cda6b4b3fcaa2 [peers: [], term: 2, commit: 457, applied: 0, lastindex: 457, lastterm: 2]"}
	
	
	==> etcd [60fbf90dec670c2cc5f1832e4c3396317b604e471086ee944a7a05144c92123b] <==
	{"level":"info","ts":"2024-09-20T18:53:36.931832Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ac52cafbc0494bf3","local-member-id":"a85cda6b4b3fcaa2","added-peer-id":"a85cda6b4b3fcaa2","added-peer-peer-urls":["https://192.168.61.38:2380"]}
	{"level":"info","ts":"2024-09-20T18:53:36.932017Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ac52cafbc0494bf3","local-member-id":"a85cda6b4b3fcaa2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:53:36.932082Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:53:36.943094Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T18:53:36.944044Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"a85cda6b4b3fcaa2","initial-advertise-peer-urls":["https://192.168.61.38:2380"],"listen-peer-urls":["https://192.168.61.38:2380"],"advertise-client-urls":["https://192.168.61.38:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.38:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T18:53:36.944161Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T18:53:36.944333Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.38:2380"}
	{"level":"info","ts":"2024-09-20T18:53:36.944415Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.38:2380"}
	{"level":"info","ts":"2024-09-20T18:53:38.567686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-20T18:53:38.567845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-20T18:53:38.567895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 received MsgPreVoteResp from a85cda6b4b3fcaa2 at term 2"}
	{"level":"info","ts":"2024-09-20T18:53:38.567937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T18:53:38.567966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 received MsgVoteResp from a85cda6b4b3fcaa2 at term 3"}
	{"level":"info","ts":"2024-09-20T18:53:38.568011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 became leader at term 3"}
	{"level":"info","ts":"2024-09-20T18:53:38.568042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a85cda6b4b3fcaa2 elected leader a85cda6b4b3fcaa2 at term 3"}
	{"level":"info","ts":"2024-09-20T18:53:38.569295Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"a85cda6b4b3fcaa2","local-member-attributes":"{Name:pause-554447 ClientURLs:[https://192.168.61.38:2379]}","request-path":"/0/members/a85cda6b4b3fcaa2/attributes","cluster-id":"ac52cafbc0494bf3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:53:38.569467Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:53:38.569589Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:53:38.570597Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:53:38.570634Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:53:38.571410Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:53:38.571353Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:53:38.572295Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T18:53:38.572759Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.38:2379"}
	{"level":"info","ts":"2024-09-20T18:53:50.005954Z","caller":"traceutil/trace.go:171","msg":"trace[1244357550] transaction","detail":"{read_only:false; response_revision:503; number_of_response:1; }","duration":"122.761098ms","start":"2024-09-20T18:53:49.883168Z","end":"2024-09-20T18:53:50.005929Z","steps":["trace[1244357550] 'process raft request'  (duration: 122.619238ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:54:01 up 2 min,  0 users,  load average: 2.20, 0.70, 0.25
	Linux pause-554447 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [14f94689d54d9a49a49151a0add6061df363c8f566e3ac15eb124c728e25a26e] <==
	I0920 18:53:30.411633       1 options.go:228] external host was not specified, using 192.168.61.38
	I0920 18:53:30.415095       1 server.go:142] Version: v1.31.1
	I0920 18:53:30.415141       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:53:30.961154       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0920 18:53:30.962231       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:53:30.962347       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0920 18:53:30.977106       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 18:53:30.980873       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0920 18:53:30.980898       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0920 18:53:30.981127       1 instance.go:232] Using reconciler: lease
	W0920 18:53:30.982113       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [ff327236bd6632d6c7e181d24c28830c253981dbe576a5d554a2fc44a5d10a4e] <==
	I0920 18:53:40.222102       1 policy_source.go:224] refreshing policies
	I0920 18:53:40.279717       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 18:53:40.279986       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 18:53:40.280560       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 18:53:40.280006       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 18:53:40.280031       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 18:53:40.285896       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 18:53:40.285982       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 18:53:40.286026       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 18:53:40.286319       1 aggregator.go:171] initial CRD sync complete...
	I0920 18:53:40.286343       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 18:53:40.286350       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 18:53:40.286356       1 cache.go:39] Caches are synced for autoregister controller
	I0920 18:53:40.287128       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0920 18:53:40.298075       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0920 18:53:40.302472       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 18:53:40.308544       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 18:53:41.087023       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 18:53:42.467598       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 18:53:42.505068       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 18:53:42.568689       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 18:53:42.621840       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 18:53:42.634961       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0920 18:53:43.544624       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 18:53:43.940922       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [7ade82905607621ba39bf27ef1b0ba512f76fb2e3300b624dc1ce7c2168e6cd9] <==
	
	
	==> kube-controller-manager [ad3d3d2edb34d41873387590960fa51edb1d72c325f9b98eeefcb136cbc41c9c] <==
	I0920 18:53:43.543421       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0920 18:53:43.543664       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-554447"
	I0920 18:53:43.543730       1 shared_informer.go:320] Caches are synced for ephemeral
	I0920 18:53:43.544077       1 shared_informer.go:320] Caches are synced for deployment
	I0920 18:53:43.550801       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0920 18:53:43.552428       1 shared_informer.go:320] Caches are synced for namespace
	I0920 18:53:43.554804       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0920 18:53:43.565221       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0920 18:53:43.566643       1 shared_informer.go:320] Caches are synced for disruption
	I0920 18:53:43.575503       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0920 18:53:43.585423       1 shared_informer.go:320] Caches are synced for HPA
	I0920 18:53:43.603821       1 shared_informer.go:320] Caches are synced for cronjob
	I0920 18:53:43.636829       1 shared_informer.go:320] Caches are synced for job
	I0920 18:53:43.656521       1 shared_informer.go:320] Caches are synced for endpoint
	I0920 18:53:43.662068       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0920 18:53:43.671456       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0920 18:53:43.753255       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:53:43.753276       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:53:43.900088       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="344.782898ms"
	I0920 18:53:43.900225       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="76.102µs"
	I0920 18:53:44.187438       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 18:53:44.196313       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 18:53:44.196471       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0920 18:53:51.190099       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="12.073238ms"
	I0920 18:53:51.190235       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="70.664µs"
	
	
	==> kube-proxy [869d49ac806ce100c88b85eed7a0506a0a3ce6ad3234984a07a3c7501a06bf70] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:53:41.727873       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:53:41.742708       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.38"]
	E0920 18:53:41.742820       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:53:41.790765       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:53:41.790841       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:53:41.790888       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:53:41.794565       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:53:41.794926       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:53:41.794951       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:53:41.796614       1 config.go:199] "Starting service config controller"
	I0920 18:53:41.796671       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:53:41.796700       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:53:41.796703       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:53:41.798960       1 config.go:328] "Starting node config controller"
	I0920 18:53:41.799003       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:53:41.897772       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:53:41.897877       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:53:41.899462       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e9eb1fb088d99924633f831fde758f31a45f76fe95fb90b925d61a70a6646849] <==
	
	
	==> kube-scheduler [37c5ce73b96fd4db8562f7833f3c22837f6cb9687c761ff7aa9867050097dab2] <==
	
	
	==> kube-scheduler [5242873eb735319c57a5e041c1d99599cc13bd7890d8de864304c0874fad874e] <==
	I0920 18:53:37.997926       1 serving.go:386] Generated self-signed cert in-memory
	W0920 18:53:40.163140       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 18:53:40.163180       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 18:53:40.163191       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:53:40.163203       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:53:40.203218       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 18:53:40.203679       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:53:40.209292       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 18:53:40.209494       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 18:53:40.209552       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:53:40.211654       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	W0920 18:53:40.230186       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0920 18:53:40.230551       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0920 18:53:40.230649       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError"
	E0920 18:53:40.230497       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError"
	I0920 18:53:40.310431       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:53:36 pause-554447 kubelet[3266]: I0920 18:53:36.456319    3266 scope.go:117] "RemoveContainer" containerID="306ad987a3acfb2818b72735ddbc1e5c74c39b2964697da8e0a180411d79f0ad"
	Sep 20 18:53:36 pause-554447 kubelet[3266]: I0920 18:53:36.460795    3266 scope.go:117] "RemoveContainer" containerID="7ade82905607621ba39bf27ef1b0ba512f76fb2e3300b624dc1ce7c2168e6cd9"
	Sep 20 18:53:36 pause-554447 kubelet[3266]: I0920 18:53:36.463160    3266 scope.go:117] "RemoveContainer" containerID="37c5ce73b96fd4db8562f7833f3c22837f6cb9687c761ff7aa9867050097dab2"
	Sep 20 18:53:36 pause-554447 kubelet[3266]: I0920 18:53:36.467294    3266 scope.go:117] "RemoveContainer" containerID="14f94689d54d9a49a49151a0add6061df363c8f566e3ac15eb124c728e25a26e"
	Sep 20 18:53:36 pause-554447 kubelet[3266]: E0920 18:53:36.631514    3266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-554447?timeout=10s\": dial tcp 192.168.61.38:8443: connect: connection refused" interval="800ms"
	Sep 20 18:53:36 pause-554447 kubelet[3266]: W0920 18:53:36.828075    3266 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.61.38:8443: connect: connection refused
	Sep 20 18:53:36 pause-554447 kubelet[3266]: E0920 18:53:36.828160    3266 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.61.38:8443: connect: connection refused" logger="UnhandledError"
	Sep 20 18:53:36 pause-554447 kubelet[3266]: I0920 18:53:36.855068    3266 kubelet_node_status.go:72] "Attempting to register node" node="pause-554447"
	Sep 20 18:53:36 pause-554447 kubelet[3266]: E0920 18:53:36.859251    3266 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.38:8443: connect: connection refused" node="pause-554447"
	Sep 20 18:53:37 pause-554447 kubelet[3266]: I0920 18:53:37.661501    3266 kubelet_node_status.go:72] "Attempting to register node" node="pause-554447"
	Sep 20 18:53:40 pause-554447 kubelet[3266]: I0920 18:53:40.269523    3266 kubelet_node_status.go:111] "Node was previously registered" node="pause-554447"
	Sep 20 18:53:40 pause-554447 kubelet[3266]: I0920 18:53:40.270075    3266 kubelet_node_status.go:75] "Successfully registered node" node="pause-554447"
	Sep 20 18:53:40 pause-554447 kubelet[3266]: I0920 18:53:40.270201    3266 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 20 18:53:40 pause-554447 kubelet[3266]: I0920 18:53:40.271737    3266 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 20 18:53:41 pause-554447 kubelet[3266]: I0920 18:53:41.010278    3266 apiserver.go:52] "Watching apiserver"
	Sep 20 18:53:41 pause-554447 kubelet[3266]: I0920 18:53:41.033686    3266 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 20 18:53:41 pause-554447 kubelet[3266]: I0920 18:53:41.100525    3266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e16e6c3-dc2a-4df5-8582-2ebf7026fe87-xtables-lock\") pod \"kube-proxy-p8m8l\" (UID: \"3e16e6c3-dc2a-4df5-8582-2ebf7026fe87\") " pod="kube-system/kube-proxy-p8m8l"
	Sep 20 18:53:41 pause-554447 kubelet[3266]: I0920 18:53:41.100793    3266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e16e6c3-dc2a-4df5-8582-2ebf7026fe87-lib-modules\") pod \"kube-proxy-p8m8l\" (UID: \"3e16e6c3-dc2a-4df5-8582-2ebf7026fe87\") " pod="kube-system/kube-proxy-p8m8l"
	Sep 20 18:53:41 pause-554447 kubelet[3266]: I0920 18:53:41.314777    3266 scope.go:117] "RemoveContainer" containerID="59b28fc2b896a53558f83673c614029cb8a1db88928dc151b9287ff813a8dd82"
	Sep 20 18:53:41 pause-554447 kubelet[3266]: I0920 18:53:41.315194    3266 scope.go:117] "RemoveContainer" containerID="e9eb1fb088d99924633f831fde758f31a45f76fe95fb90b925d61a70a6646849"
	Sep 20 18:53:46 pause-554447 kubelet[3266]: E0920 18:53:46.154294    3266 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858426153348078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:46 pause-554447 kubelet[3266]: E0920 18:53:46.154335    3266 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858426153348078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:51 pause-554447 kubelet[3266]: I0920 18:53:51.153279    3266 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 20 18:53:56 pause-554447 kubelet[3266]: E0920 18:53:56.155990    3266 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858436155574636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:56 pause-554447 kubelet[3266]: E0920 18:53:56.156030    3266 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858436155574636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:53:59.333189  291010 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19679-237658/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
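The "bufio.Scanner: token too long" message in the stderr block above is a Go standard-library limit rather than a defect in the log itself: bufio.Scanner refuses any single line longer than its maximum token size, which defaults to bufio.MaxScanTokenSize (64 KiB), and the one-line cluster-config entries written to lastStart.txt exceed that. Below is a minimal, hypothetical sketch (not the minikube helpers' actual code) of scanning such a file with an enlarged buffer; the file path and size limits are illustrative assumptions only.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	// Illustrative path; the report above reads .../.minikube/logs/lastStart.txt.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// By default a token (one line) may not exceed bufio.MaxScanTokenSize (64 KiB);
	// a longer line makes Scan stop and Err report "token too long". Raising the
	// cap lets very long single-line entries, such as dumped cluster configs, scan.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		log.Fatalf("reading log: %v", err)
	}
}

An alternative that avoids a fixed cap entirely is bufio.Reader with ReadString('\n'), which grows its buffer as needed per line.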
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-554447 -n pause-554447
helpers_test.go:261: (dbg) Run:  kubectl --context pause-554447 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-554447 -n pause-554447
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-554447 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-554447 logs -n 25: (1.741454072s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-793540 sudo systemctl                        | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | cat cri-docker --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo cat                              | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo cat                              | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo                                  | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo systemctl                        | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | status containerd --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo systemctl                        | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | cat containerd --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo cat                              | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo cat                              | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo containerd                       | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | config dump                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo systemctl                        | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | status crio --all --full                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo systemctl                        | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | cat crio --no-pager                                  |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo find                             | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p auto-793540 sudo crio                             | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p auto-793540                                       | auto-793540           | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	| start   | -p custom-flannel-793540                             | custom-flannel-793540 | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	| ssh     | -p kindnet-793540 pgrep -a                           | kindnet-793540        | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | kubelet                                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-793540 sudo cat                           | kindnet-793540        | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | /etc/nsswitch.conf                                   |                       |         |         |                     |                     |
	| ssh     | -p kindnet-793540 sudo cat                           | kindnet-793540        | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | /etc/hosts                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-793540 sudo cat                           | kindnet-793540        | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:53 UTC |
	|         | /etc/resolv.conf                                     |                       |         |         |                     |                     |
	| ssh     | -p kindnet-793540 sudo crictl                        | kindnet-793540        | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC | 20 Sep 24 18:54 UTC |
	|         | pods                                                 |                       |         |         |                     |                     |
	| ssh     | -p kindnet-793540 sudo crictl                        | kindnet-793540        | jenkins | v1.34.0 | 20 Sep 24 18:54 UTC | 20 Sep 24 18:54 UTC |
	|         | ps --all                                             |                       |         |         |                     |                     |
	| ssh     | -p kindnet-793540 sudo find                          | kindnet-793540        | jenkins | v1.34.0 | 20 Sep 24 18:54 UTC | 20 Sep 24 18:54 UTC |
	|         | /etc/cni -type f -exec sh -c                         |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p kindnet-793540 sudo ip a s                        | kindnet-793540        | jenkins | v1.34.0 | 20 Sep 24 18:54 UTC | 20 Sep 24 18:54 UTC |
	| ssh     | -p kindnet-793540 sudo ip r s                        | kindnet-793540        | jenkins | v1.34.0 | 20 Sep 24 18:54 UTC | 20 Sep 24 18:54 UTC |
	| ssh     | -p kindnet-793540 sudo                               | kindnet-793540        | jenkins | v1.34.0 | 20 Sep 24 18:54 UTC |                     |
	|         | iptables-save                                        |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:53:34
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:53:34.761884  290520 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:53:34.762256  290520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:53:34.762304  290520 out.go:358] Setting ErrFile to fd 2...
	I0920 18:53:34.762320  290520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:53:34.762928  290520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 18:53:34.763682  290520 out.go:352] Setting JSON to false
	I0920 18:53:34.765542  290520 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9358,"bootTime":1726849057,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:53:34.765697  290520 start.go:139] virtualization: kvm guest
	I0920 18:53:34.768455  290520 out.go:177] * [custom-flannel-793540] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:53:34.770064  290520 notify.go:220] Checking for updates...
	I0920 18:53:34.770073  290520 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 18:53:34.771922  290520 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:53:34.773347  290520 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 18:53:34.774890  290520 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:53:34.776611  290520 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:53:34.778173  290520 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:53:34.780331  290520 config.go:182] Loaded profile config "calico-793540": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:53:34.780485  290520 config.go:182] Loaded profile config "kindnet-793540": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:53:34.780662  290520 config.go:182] Loaded profile config "pause-554447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:53:34.780766  290520 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:53:34.825420  290520 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 18:53:34.827080  290520 start.go:297] selected driver: kvm2
	I0920 18:53:34.827113  290520 start.go:901] validating driver "kvm2" against <nil>
	I0920 18:53:34.827150  290520 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:53:34.828272  290520 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:53:34.828387  290520 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:53:34.846448  290520 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:53:34.846507  290520 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:53:34.846829  290520 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:53:34.846862  290520 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0920 18:53:34.846880  290520 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0920 18:53:34.846941  290520 start.go:340] cluster config:
	{Name:custom-flannel-793540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-793540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:53:34.847090  290520 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:53:34.849008  290520 out.go:177] * Starting "custom-flannel-793540" primary control-plane node in "custom-flannel-793540" cluster
	I0920 18:53:34.850494  290520 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:53:34.850545  290520 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:53:34.850556  290520 cache.go:56] Caching tarball of preloaded images
	I0920 18:53:34.850691  290520 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:53:34.850703  290520 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:53:34.850889  290520 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/config.json ...
	I0920 18:53:34.850919  290520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/config.json: {Name:mk19dc05f622b1071ca1610306d3a792c86b9b15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:53:34.851120  290520 start.go:360] acquireMachinesLock for custom-flannel-793540: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:53:34.851182  290520 start.go:364] duration metric: took 37.747µs to acquireMachinesLock for "custom-flannel-793540"
	I0920 18:53:34.851205  290520 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-793540 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-793540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:53:34.851298  290520 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 18:53:32.395762  288558 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:53:32.395791  288558 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:53:32.395802  288558 kubeadm.go:934] updating node { 192.168.61.38 8443 v1.31.1 crio true true} ...
	I0920 18:53:32.395937  288558 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-554447 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-554447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:53:32.396027  288558 ssh_runner.go:195] Run: crio config
	I0920 18:53:32.481642  288558 cni.go:84] Creating CNI manager for ""
	I0920 18:53:32.481678  288558 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:53:32.481691  288558 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:53:32.481721  288558 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.38 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-554447 NodeName:pause-554447 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:53:32.481937  288558 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-554447"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:53:32.482019  288558 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:53:32.496689  288558 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:53:32.496777  288558 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:53:32.514404  288558 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0920 18:53:32.545712  288558 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:53:32.581826  288558 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 18:53:32.603179  288558 ssh_runner.go:195] Run: grep 192.168.61.38	control-plane.minikube.internal$ /etc/hosts
	I0920 18:53:32.616278  288558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:53:32.796541  288558 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:53:32.822096  288558 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447 for IP: 192.168.61.38
	I0920 18:53:32.822125  288558 certs.go:194] generating shared ca certs ...
	I0920 18:53:32.822147  288558 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:53:32.822350  288558 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 18:53:32.822405  288558 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 18:53:32.822420  288558 certs.go:256] generating profile certs ...
	I0920 18:53:32.822525  288558 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/client.key
	I0920 18:53:32.822634  288558 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/apiserver.key.4bf6a80a
	I0920 18:53:32.822698  288558 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/proxy-client.key
	I0920 18:53:32.822846  288558 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 18:53:32.822889  288558 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 18:53:32.822904  288558 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:53:32.822940  288558 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:53:32.822973  288558 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:53:32.823004  288558 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 18:53:32.823064  288558 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:53:32.823741  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:53:32.852035  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:53:32.885772  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:53:32.911843  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:53:32.951986  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 18:53:33.019869  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:53:33.052337  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:53:33.083916  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/pause-554447/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:53:33.122241  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 18:53:33.157454  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:53:33.190163  288558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 18:53:33.222729  288558 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:53:33.242742  288558 ssh_runner.go:195] Run: openssl version
	I0920 18:53:33.250955  288558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 18:53:33.264274  288558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 18:53:33.269125  288558 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 18:53:33.269194  288558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 18:53:33.276616  288558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:53:33.289078  288558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:53:33.303593  288558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:53:33.308792  288558 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:53:33.308880  288558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:53:33.315432  288558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:53:33.327713  288558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 18:53:33.340929  288558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 18:53:33.346925  288558 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 18:53:33.347002  288558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 18:53:33.353643  288558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 18:53:33.365392  288558 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:53:33.372217  288558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:53:33.380377  288558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:53:33.387564  288558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:53:33.395743  288558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:53:33.402595  288558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:53:33.408810  288558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 18:53:33.415252  288558 kubeadm.go:392] StartCluster: {Name:pause-554447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-554447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:53:33.415439  288558 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:53:33.415516  288558 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:53:33.458794  288558 cri.go:89] found id: "59b28fc2b896a53558f83673c614029cb8a1db88928dc151b9287ff813a8dd82"
	I0920 18:53:33.458820  288558 cri.go:89] found id: "37c5ce73b96fd4db8562f7833f3c22837f6cb9687c761ff7aa9867050097dab2"
	I0920 18:53:33.458825  288558 cri.go:89] found id: "7ade82905607621ba39bf27ef1b0ba512f76fb2e3300b624dc1ce7c2168e6cd9"
	I0920 18:53:33.458828  288558 cri.go:89] found id: "e9eb1fb088d99924633f831fde758f31a45f76fe95fb90b925d61a70a6646849"
	I0920 18:53:33.458831  288558 cri.go:89] found id: "306ad987a3acfb2818b72735ddbc1e5c74c39b2964697da8e0a180411d79f0ad"
	I0920 18:53:33.458834  288558 cri.go:89] found id: "14f94689d54d9a49a49151a0add6061df363c8f566e3ac15eb124c728e25a26e"
	I0920 18:53:33.458837  288558 cri.go:89] found id: ""
	I0920 18:53:33.458886  288558 ssh_runner.go:195] Run: sudo runc list -f json
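
	The tail of the runner log above covers two setup steps: the repeated `openssl x509 ... -checkend 86400` calls confirm that each control-plane certificate will still be valid 86400 seconds (24 hours) from now, and the `crictl ps -a --quiet --label ...` / `runc list -f json` pair enumerates the existing kube-system containers before kubeadm is driven again. The commands below are only a hedged sketch of how those checks could be re-run by hand on the pause-554447 node; the certificate path is taken from the log, and reaching the node via `minikube ssh -p pause-554447` is an assumption, not something the test itself does.

	    # Illustrative sketch -- re-run the checks from the log above on the node.
	    # (Assumes a shell on pause-554447, e.g. `minikube ssh -p pause-554447`.)

	    # Exit status 0 means the certificate is still valid 24h (86400s) from now.
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400

	    # List kube-system container IDs (running or exited), as minikube does before StartCluster.
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

	    # Cross-check against the OCI runtime's own view of containers.
	    sudo runc list -f json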
	
	
	==> CRI-O <==
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.807511102Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858442807470344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c78f5dc-6165-4c16-9e88-b88ba9c9733d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.808474153Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=198c14a0-4471-4e0f-807c-899f5978d77a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.808590804Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=198c14a0-4471-4e0f-807c-899f5978d77a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.809027982Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ad94f0cc8e704ef8fee0a9c236ca31425da5910fa0abc7de3545d6fefa33272,PodSandboxId:109a6691b4270ed833775170a13020f9ee68cab4d146e5963956e9a50eb08b01,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858421368711329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sszr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7b0632-f5a8-4419-ae15-0a6b031982e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869d49ac806ce100c88b85eed7a0506a0a3ce6ad3234984a07a3c7501a06bf70,PodSandboxId:015a34b7ffcd8219f9cfe7fd03af05aec6be9fe92e2c310a0132f4629e143311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858421354038418,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8m8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3e16e6c3-dc2a-4df5-8582-2ebf7026fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad3d3d2edb34d41873387590960fa51edb1d72c325f9b98eeefcb136cbc41c9c,PodSandboxId:cccedae6c29efbe565c80417dafb0d6f90e8519f003fe01bda2580b2c4206788,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858416547616401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 465abce066ceb512184b124d93a2c6cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5242873eb735319c57a5e041c1d99599cc13bd7890d8de864304c0874fad874e,PodSandboxId:7a53615b7e8f05ffc7e2432198c9a52e999dc63ac963983e3e487117d2e5cca3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858416544990208,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
b88851eedf4f1dda1e93f5a4298515e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff327236bd6632d6c7e181d24c28830c253981dbe576a5d554a2fc44a5d10a4e,PodSandboxId:7ae72eb79b646fc523eb201f64538162a035ec5bb5f07dda4f10b45413ca7b64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858416508000544,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f07d07086c9f4616ea
aa38e6da8cbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fbf90dec670c2cc5f1832e4c3396317b604e471086ee944a7a05144c92123b,PodSandboxId:4081a44bc4c93760754d7c9e6149c387b30fed2a59d5a0865e29c8aee792c7c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858416477475211,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec9487357c13c5c1fbc6b9db12f00483,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59b28fc2b896a53558f83673c614029cb8a1db88928dc151b9287ff813a8dd82,PodSandboxId:4aafcd213db7ab88ebcef21d6240343b4a1051fb1f7dee418919c38bad0496fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858410507600947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sszr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7b0632-f5a8-4419-ae15-0a6b031982e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9eb1fb088d99924633f831fde758f31a45f76fe95fb90b925d61a70a6646849,PodSandboxId:ac3923fd473b5c4a448fa7439fa8cd97ed595dda49c623a00865e840a68a5329,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858410021591455,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-p8m8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e16e6c3-dc2a-4df5-8582-2ebf7026fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37c5ce73b96fd4db8562f7833f3c22837f6cb9687c761ff7aa9867050097dab2,PodSandboxId:f55f5f2246af8de7f72ec8638246f43f4cdd4b2f078ca267ab9cde5d1bd96376,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858410034909234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb88851eedf4f1dda1e93f5a4298515e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ade82905607621ba39bf27ef1b0ba512f76fb2e3300b624dc1ce7c2168e6cd9,PodSandboxId:e3cb0014834430737fe22f7311f7f639fd0433c817985b900c012cea1e102153,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858410029346377,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465abce066ceb512184b124d93a2c6cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306ad987a3acfb2818b72735ddbc1e5c74c39b2964697da8e0a180411d79f0ad,PodSandboxId:95f0d6ae2b3eb2dc30f2e768200556d0e65964b804eb837dde81a2b03b57cd9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726858409580697693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-554447,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: ec9487357c13c5c1fbc6b9db12f00483,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f94689d54d9a49a49151a0add6061df363c8f566e3ac15eb124c728e25a26e,PodSandboxId:9d6cdd7159c5731676bf9d80083bff2cd9ba72d6f009c12e8e7844f07d0027fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726858409372807130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 8f07d07086c9f4616eaaa38e6da8cbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=198c14a0-4471-4e0f-807c-899f5978d77a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.869213707Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cc8ead05-9724-48ff-b016-f907411e3151 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.869337064Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cc8ead05-9724-48ff-b016-f907411e3151 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.871554723Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0569a75-b91a-4490-bf7d-54668df76937 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.871966312Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858442871936081,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0569a75-b91a-4490-bf7d-54668df76937 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.872552151Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb247ec4-ca07-4cba-85da-70be2dc36d6a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.872635541Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb247ec4-ca07-4cba-85da-70be2dc36d6a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.873004667Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ad94f0cc8e704ef8fee0a9c236ca31425da5910fa0abc7de3545d6fefa33272,PodSandboxId:109a6691b4270ed833775170a13020f9ee68cab4d146e5963956e9a50eb08b01,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858421368711329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sszr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7b0632-f5a8-4419-ae15-0a6b031982e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869d49ac806ce100c88b85eed7a0506a0a3ce6ad3234984a07a3c7501a06bf70,PodSandboxId:015a34b7ffcd8219f9cfe7fd03af05aec6be9fe92e2c310a0132f4629e143311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858421354038418,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8m8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3e16e6c3-dc2a-4df5-8582-2ebf7026fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad3d3d2edb34d41873387590960fa51edb1d72c325f9b98eeefcb136cbc41c9c,PodSandboxId:cccedae6c29efbe565c80417dafb0d6f90e8519f003fe01bda2580b2c4206788,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858416547616401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 465abce066ceb512184b124d93a2c6cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5242873eb735319c57a5e041c1d99599cc13bd7890d8de864304c0874fad874e,PodSandboxId:7a53615b7e8f05ffc7e2432198c9a52e999dc63ac963983e3e487117d2e5cca3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858416544990208,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
b88851eedf4f1dda1e93f5a4298515e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff327236bd6632d6c7e181d24c28830c253981dbe576a5d554a2fc44a5d10a4e,PodSandboxId:7ae72eb79b646fc523eb201f64538162a035ec5bb5f07dda4f10b45413ca7b64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858416508000544,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f07d07086c9f4616ea
aa38e6da8cbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fbf90dec670c2cc5f1832e4c3396317b604e471086ee944a7a05144c92123b,PodSandboxId:4081a44bc4c93760754d7c9e6149c387b30fed2a59d5a0865e29c8aee792c7c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858416477475211,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec9487357c13c5c1fbc6b9db12f00483,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59b28fc2b896a53558f83673c614029cb8a1db88928dc151b9287ff813a8dd82,PodSandboxId:4aafcd213db7ab88ebcef21d6240343b4a1051fb1f7dee418919c38bad0496fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858410507600947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sszr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7b0632-f5a8-4419-ae15-0a6b031982e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9eb1fb088d99924633f831fde758f31a45f76fe95fb90b925d61a70a6646849,PodSandboxId:ac3923fd473b5c4a448fa7439fa8cd97ed595dda49c623a00865e840a68a5329,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858410021591455,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-p8m8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e16e6c3-dc2a-4df5-8582-2ebf7026fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37c5ce73b96fd4db8562f7833f3c22837f6cb9687c761ff7aa9867050097dab2,PodSandboxId:f55f5f2246af8de7f72ec8638246f43f4cdd4b2f078ca267ab9cde5d1bd96376,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858410034909234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb88851eedf4f1dda1e93f5a4298515e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ade82905607621ba39bf27ef1b0ba512f76fb2e3300b624dc1ce7c2168e6cd9,PodSandboxId:e3cb0014834430737fe22f7311f7f639fd0433c817985b900c012cea1e102153,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858410029346377,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465abce066ceb512184b124d93a2c6cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306ad987a3acfb2818b72735ddbc1e5c74c39b2964697da8e0a180411d79f0ad,PodSandboxId:95f0d6ae2b3eb2dc30f2e768200556d0e65964b804eb837dde81a2b03b57cd9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726858409580697693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-554447,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: ec9487357c13c5c1fbc6b9db12f00483,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f94689d54d9a49a49151a0add6061df363c8f566e3ac15eb124c728e25a26e,PodSandboxId:9d6cdd7159c5731676bf9d80083bff2cd9ba72d6f009c12e8e7844f07d0027fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726858409372807130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 8f07d07086c9f4616eaaa38e6da8cbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb247ec4-ca07-4cba-85da-70be2dc36d6a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.922017701Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9fd61f80-93d6-4d6a-a594-d2c8dd0c0755 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.922097756Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9fd61f80-93d6-4d6a-a594-d2c8dd0c0755 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.923809994Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b035ff87-b397-4bb8-b3ea-35408850d747 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.924660681Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858442924629693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b035ff87-b397-4bb8-b3ea-35408850d747 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.925551546Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ea4e5d5-7e83-4bc3-87f3-015ea1eb4350 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.925651138Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ea4e5d5-7e83-4bc3-87f3-015ea1eb4350 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.925995628Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ad94f0cc8e704ef8fee0a9c236ca31425da5910fa0abc7de3545d6fefa33272,PodSandboxId:109a6691b4270ed833775170a13020f9ee68cab4d146e5963956e9a50eb08b01,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858421368711329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sszr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7b0632-f5a8-4419-ae15-0a6b031982e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869d49ac806ce100c88b85eed7a0506a0a3ce6ad3234984a07a3c7501a06bf70,PodSandboxId:015a34b7ffcd8219f9cfe7fd03af05aec6be9fe92e2c310a0132f4629e143311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858421354038418,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8m8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3e16e6c3-dc2a-4df5-8582-2ebf7026fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad3d3d2edb34d41873387590960fa51edb1d72c325f9b98eeefcb136cbc41c9c,PodSandboxId:cccedae6c29efbe565c80417dafb0d6f90e8519f003fe01bda2580b2c4206788,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858416547616401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 465abce066ceb512184b124d93a2c6cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5242873eb735319c57a5e041c1d99599cc13bd7890d8de864304c0874fad874e,PodSandboxId:7a53615b7e8f05ffc7e2432198c9a52e999dc63ac963983e3e487117d2e5cca3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858416544990208,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
b88851eedf4f1dda1e93f5a4298515e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff327236bd6632d6c7e181d24c28830c253981dbe576a5d554a2fc44a5d10a4e,PodSandboxId:7ae72eb79b646fc523eb201f64538162a035ec5bb5f07dda4f10b45413ca7b64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858416508000544,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f07d07086c9f4616ea
aa38e6da8cbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fbf90dec670c2cc5f1832e4c3396317b604e471086ee944a7a05144c92123b,PodSandboxId:4081a44bc4c93760754d7c9e6149c387b30fed2a59d5a0865e29c8aee792c7c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858416477475211,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec9487357c13c5c1fbc6b9db12f00483,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59b28fc2b896a53558f83673c614029cb8a1db88928dc151b9287ff813a8dd82,PodSandboxId:4aafcd213db7ab88ebcef21d6240343b4a1051fb1f7dee418919c38bad0496fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858410507600947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sszr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7b0632-f5a8-4419-ae15-0a6b031982e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9eb1fb088d99924633f831fde758f31a45f76fe95fb90b925d61a70a6646849,PodSandboxId:ac3923fd473b5c4a448fa7439fa8cd97ed595dda49c623a00865e840a68a5329,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858410021591455,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-p8m8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e16e6c3-dc2a-4df5-8582-2ebf7026fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37c5ce73b96fd4db8562f7833f3c22837f6cb9687c761ff7aa9867050097dab2,PodSandboxId:f55f5f2246af8de7f72ec8638246f43f4cdd4b2f078ca267ab9cde5d1bd96376,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858410034909234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb88851eedf4f1dda1e93f5a4298515e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ade82905607621ba39bf27ef1b0ba512f76fb2e3300b624dc1ce7c2168e6cd9,PodSandboxId:e3cb0014834430737fe22f7311f7f639fd0433c817985b900c012cea1e102153,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858410029346377,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465abce066ceb512184b124d93a2c6cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306ad987a3acfb2818b72735ddbc1e5c74c39b2964697da8e0a180411d79f0ad,PodSandboxId:95f0d6ae2b3eb2dc30f2e768200556d0e65964b804eb837dde81a2b03b57cd9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726858409580697693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-554447,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: ec9487357c13c5c1fbc6b9db12f00483,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f94689d54d9a49a49151a0add6061df363c8f566e3ac15eb124c728e25a26e,PodSandboxId:9d6cdd7159c5731676bf9d80083bff2cd9ba72d6f009c12e8e7844f07d0027fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726858409372807130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 8f07d07086c9f4616eaaa38e6da8cbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ea4e5d5-7e83-4bc3-87f3-015ea1eb4350 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.983740368Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef10ab1d-8daa-4ee2-9520-6e65d88378a0 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.983850042Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef10ab1d-8daa-4ee2-9520-6e65d88378a0 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.985513224Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ca73db1-660f-4bee-bc5c-a4b0018cccb8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.986049851Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858442986015245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ca73db1-660f-4bee-bc5c-a4b0018cccb8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.987239111Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef58b76f-d156-48fc-91a8-bfb703c0e15a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.987419946Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef58b76f-d156-48fc-91a8-bfb703c0e15a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:54:02 pause-554447 crio[2622]: time="2024-09-20 18:54:02.988052929Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ad94f0cc8e704ef8fee0a9c236ca31425da5910fa0abc7de3545d6fefa33272,PodSandboxId:109a6691b4270ed833775170a13020f9ee68cab4d146e5963956e9a50eb08b01,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858421368711329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sszr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7b0632-f5a8-4419-ae15-0a6b031982e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869d49ac806ce100c88b85eed7a0506a0a3ce6ad3234984a07a3c7501a06bf70,PodSandboxId:015a34b7ffcd8219f9cfe7fd03af05aec6be9fe92e2c310a0132f4629e143311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858421354038418,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8m8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3e16e6c3-dc2a-4df5-8582-2ebf7026fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad3d3d2edb34d41873387590960fa51edb1d72c325f9b98eeefcb136cbc41c9c,PodSandboxId:cccedae6c29efbe565c80417dafb0d6f90e8519f003fe01bda2580b2c4206788,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858416547616401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 465abce066ceb512184b124d93a2c6cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5242873eb735319c57a5e041c1d99599cc13bd7890d8de864304c0874fad874e,PodSandboxId:7a53615b7e8f05ffc7e2432198c9a52e999dc63ac963983e3e487117d2e5cca3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858416544990208,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
b88851eedf4f1dda1e93f5a4298515e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff327236bd6632d6c7e181d24c28830c253981dbe576a5d554a2fc44a5d10a4e,PodSandboxId:7ae72eb79b646fc523eb201f64538162a035ec5bb5f07dda4f10b45413ca7b64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858416508000544,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f07d07086c9f4616ea
aa38e6da8cbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fbf90dec670c2cc5f1832e4c3396317b604e471086ee944a7a05144c92123b,PodSandboxId:4081a44bc4c93760754d7c9e6149c387b30fed2a59d5a0865e29c8aee792c7c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858416477475211,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec9487357c13c5c1fbc6b9db12f00483,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59b28fc2b896a53558f83673c614029cb8a1db88928dc151b9287ff813a8dd82,PodSandboxId:4aafcd213db7ab88ebcef21d6240343b4a1051fb1f7dee418919c38bad0496fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858410507600947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sszr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d7b0632-f5a8-4419-ae15-0a6b031982e1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9eb1fb088d99924633f831fde758f31a45f76fe95fb90b925d61a70a6646849,PodSandboxId:ac3923fd473b5c4a448fa7439fa8cd97ed595dda49c623a00865e840a68a5329,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858410021591455,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-p8m8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e16e6c3-dc2a-4df5-8582-2ebf7026fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37c5ce73b96fd4db8562f7833f3c22837f6cb9687c761ff7aa9867050097dab2,PodSandboxId:f55f5f2246af8de7f72ec8638246f43f4cdd4b2f078ca267ab9cde5d1bd96376,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858410034909234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb88851eedf4f1dda1e93f5a4298515e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ade82905607621ba39bf27ef1b0ba512f76fb2e3300b624dc1ce7c2168e6cd9,PodSandboxId:e3cb0014834430737fe22f7311f7f639fd0433c817985b900c012cea1e102153,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858410029346377,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465abce066ceb512184b124d93a2c6cd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306ad987a3acfb2818b72735ddbc1e5c74c39b2964697da8e0a180411d79f0ad,PodSandboxId:95f0d6ae2b3eb2dc30f2e768200556d0e65964b804eb837dde81a2b03b57cd9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726858409580697693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-554447,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: ec9487357c13c5c1fbc6b9db12f00483,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f94689d54d9a49a49151a0add6061df363c8f566e3ac15eb124c728e25a26e,PodSandboxId:9d6cdd7159c5731676bf9d80083bff2cd9ba72d6f009c12e8e7844f07d0027fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726858409372807130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-554447,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 8f07d07086c9f4616eaaa38e6da8cbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef58b76f-d156-48fc-91a8-bfb703c0e15a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7ad94f0cc8e70       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   21 seconds ago      Running             coredns                   2                   109a6691b4270       coredns-7c65d6cfc9-sszr2
	869d49ac806ce       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   21 seconds ago      Running             kube-proxy                2                   015a34b7ffcd8       kube-proxy-p8m8l
	ad3d3d2edb34d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   26 seconds ago      Running             kube-controller-manager   2                   cccedae6c29ef       kube-controller-manager-pause-554447
	5242873eb7353       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   26 seconds ago      Running             kube-scheduler            2                   7a53615b7e8f0       kube-scheduler-pause-554447
	ff327236bd663       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   26 seconds ago      Running             kube-apiserver            2                   7ae72eb79b646       kube-apiserver-pause-554447
	60fbf90dec670       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   26 seconds ago      Running             etcd                      2                   4081a44bc4c93       etcd-pause-554447
	59b28fc2b896a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   32 seconds ago      Exited              coredns                   1                   4aafcd213db7a       coredns-7c65d6cfc9-sszr2
	37c5ce73b96fd       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   33 seconds ago      Exited              kube-scheduler            1                   f55f5f2246af8       kube-scheduler-pause-554447
	7ade829056076       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   33 seconds ago      Exited              kube-controller-manager   1                   e3cb001483443       kube-controller-manager-pause-554447
	e9eb1fb088d99       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   33 seconds ago      Exited              kube-proxy                1                   ac3923fd473b5       kube-proxy-p8m8l
	306ad987a3acf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   33 seconds ago      Exited              etcd                      1                   95f0d6ae2b3eb       etcd-pause-554447
	14f94689d54d9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   33 seconds ago      Exited              kube-apiserver            1                   9d6cdd7159c57       kube-apiserver-pause-554447
	
	
	==> coredns [59b28fc2b896a53558f83673c614029cb8a1db88928dc151b9287ff813a8dd82] <==
	
	
	==> coredns [7ad94f0cc8e704ef8fee0a9c236ca31425da5910fa0abc7de3545d6fefa33272] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33710 - 17147 "HINFO IN 1689799688632273517.6893539958490244548. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014786329s
	
	
	==> describe nodes <==
	Name:               pause-554447
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-554447
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=pause-554447
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_52_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:52:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-554447
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:54:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:53:40 +0000   Fri, 20 Sep 2024 18:52:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:53:40 +0000   Fri, 20 Sep 2024 18:52:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:53:40 +0000   Fri, 20 Sep 2024 18:52:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:53:40 +0000   Fri, 20 Sep 2024 18:52:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.38
	  Hostname:    pause-554447
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a06f74c9c124fd4a324b67fd1939cc1
	  System UUID:                0a06f74c-9c12-4fd4-a324-b67fd1939cc1
	  Boot ID:                    6d363e35-343d-4666-850a-1830bb0d48d1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-sszr2                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     103s
	  kube-system                 etcd-pause-554447                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         109s
	  kube-system                 kube-apiserver-pause-554447             250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-pause-554447    200m (10%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-p8m8l                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-pause-554447             100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 21s                  kube-proxy       
	  Normal  Starting                 116s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet          Node pause-554447 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet          Node pause-554447 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x7 over 116s)  kubelet          Node pause-554447 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  116s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    109s                 kubelet          Node pause-554447 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  109s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  109s                 kubelet          Node pause-554447 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     109s                 kubelet          Node pause-554447 status is now: NodeHasSufficientPID
	  Normal  Starting                 109s                 kubelet          Starting kubelet.
	  Normal  NodeReady                108s                 kubelet          Node pause-554447 status is now: NodeReady
	  Normal  RegisteredNode           104s                 node-controller  Node pause-554447 event: Registered Node pause-554447 in Controller
	  Normal  Starting                 27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)    kubelet          Node pause-554447 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)    kubelet          Node pause-554447 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)    kubelet          Node pause-554447 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20s                  node-controller  Node pause-554447 event: Registered Node pause-554447 in Controller
	
	
	==> dmesg <==
	[ +10.460297] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.064995] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069362] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.166079] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.141996] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.317741] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[Sep20 18:52] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +4.244931] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.071210] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.496670] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[  +0.120931] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.392451] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.140013] kauditd_printk_skb: 21 callbacks suppressed
	[  +9.040785] kauditd_printk_skb: 69 callbacks suppressed
	[Sep20 18:53] systemd-fstab-generator[2216]: Ignoring "noauto" option for root device
	[  +0.266600] systemd-fstab-generator[2308]: Ignoring "noauto" option for root device
	[  +0.285593] systemd-fstab-generator[2396]: Ignoring "noauto" option for root device
	[  +0.206114] systemd-fstab-generator[2413]: Ignoring "noauto" option for root device
	[  +0.539847] systemd-fstab-generator[2566]: Ignoring "noauto" option for root device
	[  +2.075851] systemd-fstab-generator[3137]: Ignoring "noauto" option for root device
	[  +3.089879] systemd-fstab-generator[3259]: Ignoring "noauto" option for root device
	[  +0.081724] kauditd_printk_skb: 244 callbacks suppressed
	[  +5.609946] kauditd_printk_skb: 38 callbacks suppressed
	[  +9.753635] kauditd_printk_skb: 4 callbacks suppressed
	[  +4.575449] systemd-fstab-generator[3695]: Ignoring "noauto" option for root device
	
	
	==> etcd [306ad987a3acfb2818b72735ddbc1e5c74c39b2964697da8e0a180411d79f0ad] <==
	{"level":"warn","ts":"2024-09-20T18:53:30.328650Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-09-20T18:53:30.328791Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.61.38:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.61.38:2380","--initial-cluster=pause-554447=https://192.168.61.38:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.61.38:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.61.38:2380","--name=pause-554447","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-c
a-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-09-20T18:53:30.354678Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-09-20T18:53:30.354771Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-09-20T18:53:30.354796Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.61.38:2380"]}
	{"level":"info","ts":"2024-09-20T18:53:30.354909Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T18:53:30.356186Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.38:2379"]}
	{"level":"info","ts":"2024-09-20T18:53:30.356481Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-554447","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.61.38:2380"],"listen-peer-urls":["https://192.168.61.38:2380"],"advertise-client-urls":["https://192.168.61.38:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.38:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluste
r-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-09-20T18:53:30.372712Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"15.586512ms"}
	{"level":"info","ts":"2024-09-20T18:53:30.390543Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-20T18:53:30.401132Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"ac52cafbc0494bf3","local-member-id":"a85cda6b4b3fcaa2","commit-index":457}
	{"level":"info","ts":"2024-09-20T18:53:30.401318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-20T18:53:30.401432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 became follower at term 2"}
	{"level":"info","ts":"2024-09-20T18:53:30.401452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft a85cda6b4b3fcaa2 [peers: [], term: 2, commit: 457, applied: 0, lastindex: 457, lastterm: 2]"}
	
	
	==> etcd [60fbf90dec670c2cc5f1832e4c3396317b604e471086ee944a7a05144c92123b] <==
	{"level":"info","ts":"2024-09-20T18:53:36.931832Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ac52cafbc0494bf3","local-member-id":"a85cda6b4b3fcaa2","added-peer-id":"a85cda6b4b3fcaa2","added-peer-peer-urls":["https://192.168.61.38:2380"]}
	{"level":"info","ts":"2024-09-20T18:53:36.932017Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ac52cafbc0494bf3","local-member-id":"a85cda6b4b3fcaa2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:53:36.932082Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:53:36.943094Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T18:53:36.944044Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"a85cda6b4b3fcaa2","initial-advertise-peer-urls":["https://192.168.61.38:2380"],"listen-peer-urls":["https://192.168.61.38:2380"],"advertise-client-urls":["https://192.168.61.38:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.38:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T18:53:36.944161Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T18:53:36.944333Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.38:2380"}
	{"level":"info","ts":"2024-09-20T18:53:36.944415Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.38:2380"}
	{"level":"info","ts":"2024-09-20T18:53:38.567686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-20T18:53:38.567845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-20T18:53:38.567895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 received MsgPreVoteResp from a85cda6b4b3fcaa2 at term 2"}
	{"level":"info","ts":"2024-09-20T18:53:38.567937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T18:53:38.567966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 received MsgVoteResp from a85cda6b4b3fcaa2 at term 3"}
	{"level":"info","ts":"2024-09-20T18:53:38.568011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 became leader at term 3"}
	{"level":"info","ts":"2024-09-20T18:53:38.568042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a85cda6b4b3fcaa2 elected leader a85cda6b4b3fcaa2 at term 3"}
	{"level":"info","ts":"2024-09-20T18:53:38.569295Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"a85cda6b4b3fcaa2","local-member-attributes":"{Name:pause-554447 ClientURLs:[https://192.168.61.38:2379]}","request-path":"/0/members/a85cda6b4b3fcaa2/attributes","cluster-id":"ac52cafbc0494bf3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:53:38.569467Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:53:38.569589Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:53:38.570597Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:53:38.570634Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:53:38.571410Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:53:38.571353Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:53:38.572295Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T18:53:38.572759Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.38:2379"}
	{"level":"info","ts":"2024-09-20T18:53:50.005954Z","caller":"traceutil/trace.go:171","msg":"trace[1244357550] transaction","detail":"{read_only:false; response_revision:503; number_of_response:1; }","duration":"122.761098ms","start":"2024-09-20T18:53:49.883168Z","end":"2024-09-20T18:53:50.005929Z","steps":["trace[1244357550] 'process raft request'  (duration: 122.619238ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:54:03 up 2 min,  0 users,  load average: 2.20, 0.70, 0.25
	Linux pause-554447 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [14f94689d54d9a49a49151a0add6061df363c8f566e3ac15eb124c728e25a26e] <==
	I0920 18:53:30.411633       1 options.go:228] external host was not specified, using 192.168.61.38
	I0920 18:53:30.415095       1 server.go:142] Version: v1.31.1
	I0920 18:53:30.415141       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:53:30.961154       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0920 18:53:30.962231       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 18:53:30.962347       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0920 18:53:30.977106       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 18:53:30.980873       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0920 18:53:30.980898       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0920 18:53:30.981127       1 instance.go:232] Using reconciler: lease
	W0920 18:53:30.982113       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [ff327236bd6632d6c7e181d24c28830c253981dbe576a5d554a2fc44a5d10a4e] <==
	I0920 18:53:40.222102       1 policy_source.go:224] refreshing policies
	I0920 18:53:40.279717       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 18:53:40.279986       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 18:53:40.280560       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 18:53:40.280006       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 18:53:40.280031       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 18:53:40.285896       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 18:53:40.285982       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 18:53:40.286026       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 18:53:40.286319       1 aggregator.go:171] initial CRD sync complete...
	I0920 18:53:40.286343       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 18:53:40.286350       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 18:53:40.286356       1 cache.go:39] Caches are synced for autoregister controller
	I0920 18:53:40.287128       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0920 18:53:40.298075       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0920 18:53:40.302472       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 18:53:40.308544       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 18:53:41.087023       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 18:53:42.467598       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 18:53:42.505068       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 18:53:42.568689       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 18:53:42.621840       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 18:53:42.634961       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0920 18:53:43.544624       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 18:53:43.940922       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [7ade82905607621ba39bf27ef1b0ba512f76fb2e3300b624dc1ce7c2168e6cd9] <==
	
	
	==> kube-controller-manager [ad3d3d2edb34d41873387590960fa51edb1d72c325f9b98eeefcb136cbc41c9c] <==
	I0920 18:53:43.543421       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0920 18:53:43.543664       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-554447"
	I0920 18:53:43.543730       1 shared_informer.go:320] Caches are synced for ephemeral
	I0920 18:53:43.544077       1 shared_informer.go:320] Caches are synced for deployment
	I0920 18:53:43.550801       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0920 18:53:43.552428       1 shared_informer.go:320] Caches are synced for namespace
	I0920 18:53:43.554804       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0920 18:53:43.565221       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0920 18:53:43.566643       1 shared_informer.go:320] Caches are synced for disruption
	I0920 18:53:43.575503       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0920 18:53:43.585423       1 shared_informer.go:320] Caches are synced for HPA
	I0920 18:53:43.603821       1 shared_informer.go:320] Caches are synced for cronjob
	I0920 18:53:43.636829       1 shared_informer.go:320] Caches are synced for job
	I0920 18:53:43.656521       1 shared_informer.go:320] Caches are synced for endpoint
	I0920 18:53:43.662068       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0920 18:53:43.671456       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0920 18:53:43.753255       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:53:43.753276       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:53:43.900088       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="344.782898ms"
	I0920 18:53:43.900225       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="76.102µs"
	I0920 18:53:44.187438       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 18:53:44.196313       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 18:53:44.196471       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0920 18:53:51.190099       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="12.073238ms"
	I0920 18:53:51.190235       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="70.664µs"
	
	
	==> kube-proxy [869d49ac806ce100c88b85eed7a0506a0a3ce6ad3234984a07a3c7501a06bf70] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:53:41.727873       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:53:41.742708       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.38"]
	E0920 18:53:41.742820       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:53:41.790765       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:53:41.790841       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:53:41.790888       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:53:41.794565       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:53:41.794926       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:53:41.794951       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:53:41.796614       1 config.go:199] "Starting service config controller"
	I0920 18:53:41.796671       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:53:41.796700       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:53:41.796703       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:53:41.798960       1 config.go:328] "Starting node config controller"
	I0920 18:53:41.799003       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:53:41.897772       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:53:41.897877       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:53:41.899462       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e9eb1fb088d99924633f831fde758f31a45f76fe95fb90b925d61a70a6646849] <==
	
	
	==> kube-scheduler [37c5ce73b96fd4db8562f7833f3c22837f6cb9687c761ff7aa9867050097dab2] <==
	
	
	==> kube-scheduler [5242873eb735319c57a5e041c1d99599cc13bd7890d8de864304c0874fad874e] <==
	I0920 18:53:37.997926       1 serving.go:386] Generated self-signed cert in-memory
	W0920 18:53:40.163140       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 18:53:40.163180       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 18:53:40.163191       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:53:40.163203       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:53:40.203218       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 18:53:40.203679       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:53:40.209292       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 18:53:40.209494       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 18:53:40.209552       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:53:40.211654       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	W0920 18:53:40.230186       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0920 18:53:40.230551       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0920 18:53:40.230649       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError"
	E0920 18:53:40.230497       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError"
	I0920 18:53:40.310431       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:53:36 pause-554447 kubelet[3266]: I0920 18:53:36.456319    3266 scope.go:117] "RemoveContainer" containerID="306ad987a3acfb2818b72735ddbc1e5c74c39b2964697da8e0a180411d79f0ad"
	Sep 20 18:53:36 pause-554447 kubelet[3266]: I0920 18:53:36.460795    3266 scope.go:117] "RemoveContainer" containerID="7ade82905607621ba39bf27ef1b0ba512f76fb2e3300b624dc1ce7c2168e6cd9"
	Sep 20 18:53:36 pause-554447 kubelet[3266]: I0920 18:53:36.463160    3266 scope.go:117] "RemoveContainer" containerID="37c5ce73b96fd4db8562f7833f3c22837f6cb9687c761ff7aa9867050097dab2"
	Sep 20 18:53:36 pause-554447 kubelet[3266]: I0920 18:53:36.467294    3266 scope.go:117] "RemoveContainer" containerID="14f94689d54d9a49a49151a0add6061df363c8f566e3ac15eb124c728e25a26e"
	Sep 20 18:53:36 pause-554447 kubelet[3266]: E0920 18:53:36.631514    3266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-554447?timeout=10s\": dial tcp 192.168.61.38:8443: connect: connection refused" interval="800ms"
	Sep 20 18:53:36 pause-554447 kubelet[3266]: W0920 18:53:36.828075    3266 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.61.38:8443: connect: connection refused
	Sep 20 18:53:36 pause-554447 kubelet[3266]: E0920 18:53:36.828160    3266 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.61.38:8443: connect: connection refused" logger="UnhandledError"
	Sep 20 18:53:36 pause-554447 kubelet[3266]: I0920 18:53:36.855068    3266 kubelet_node_status.go:72] "Attempting to register node" node="pause-554447"
	Sep 20 18:53:36 pause-554447 kubelet[3266]: E0920 18:53:36.859251    3266 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.38:8443: connect: connection refused" node="pause-554447"
	Sep 20 18:53:37 pause-554447 kubelet[3266]: I0920 18:53:37.661501    3266 kubelet_node_status.go:72] "Attempting to register node" node="pause-554447"
	Sep 20 18:53:40 pause-554447 kubelet[3266]: I0920 18:53:40.269523    3266 kubelet_node_status.go:111] "Node was previously registered" node="pause-554447"
	Sep 20 18:53:40 pause-554447 kubelet[3266]: I0920 18:53:40.270075    3266 kubelet_node_status.go:75] "Successfully registered node" node="pause-554447"
	Sep 20 18:53:40 pause-554447 kubelet[3266]: I0920 18:53:40.270201    3266 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 20 18:53:40 pause-554447 kubelet[3266]: I0920 18:53:40.271737    3266 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 20 18:53:41 pause-554447 kubelet[3266]: I0920 18:53:41.010278    3266 apiserver.go:52] "Watching apiserver"
	Sep 20 18:53:41 pause-554447 kubelet[3266]: I0920 18:53:41.033686    3266 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 20 18:53:41 pause-554447 kubelet[3266]: I0920 18:53:41.100525    3266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e16e6c3-dc2a-4df5-8582-2ebf7026fe87-xtables-lock\") pod \"kube-proxy-p8m8l\" (UID: \"3e16e6c3-dc2a-4df5-8582-2ebf7026fe87\") " pod="kube-system/kube-proxy-p8m8l"
	Sep 20 18:53:41 pause-554447 kubelet[3266]: I0920 18:53:41.100793    3266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e16e6c3-dc2a-4df5-8582-2ebf7026fe87-lib-modules\") pod \"kube-proxy-p8m8l\" (UID: \"3e16e6c3-dc2a-4df5-8582-2ebf7026fe87\") " pod="kube-system/kube-proxy-p8m8l"
	Sep 20 18:53:41 pause-554447 kubelet[3266]: I0920 18:53:41.314777    3266 scope.go:117] "RemoveContainer" containerID="59b28fc2b896a53558f83673c614029cb8a1db88928dc151b9287ff813a8dd82"
	Sep 20 18:53:41 pause-554447 kubelet[3266]: I0920 18:53:41.315194    3266 scope.go:117] "RemoveContainer" containerID="e9eb1fb088d99924633f831fde758f31a45f76fe95fb90b925d61a70a6646849"
	Sep 20 18:53:46 pause-554447 kubelet[3266]: E0920 18:53:46.154294    3266 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858426153348078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:46 pause-554447 kubelet[3266]: E0920 18:53:46.154335    3266 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858426153348078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:51 pause-554447 kubelet[3266]: I0920 18:53:51.153279    3266 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 20 18:53:56 pause-554447 kubelet[3266]: E0920 18:53:56.155990    3266 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858436155574636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:56 pause-554447 kubelet[3266]: E0920 18:53:56.156030    3266 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858436155574636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:54:02.437559  291434 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19679-237658/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-554447 -n pause-554447
helpers_test.go:261: (dbg) Run:  kubectl --context pause-554447 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (92.49s)
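
Note on the stderr captured above: "bufio.Scanner: token too long" is the Go standard library's bufio.ErrTooLong, returned when a single line of lastStart.txt exceeds the Scanner's buffer (bufio.MaxScanTokenSize, 64 KiB, by default). The sketch below is only an illustration of that failure mode and the usual workaround of enlarging the buffer; the file name and the 10 MiB cap are assumptions for the example, not values taken from the minikube source.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Hypothetical path; stands in for .minikube/logs/lastStart.txt from the report.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Without this call, any line longer than bufio.MaxScanTokenSize (64 KiB) makes
	// sc.Scan() stop and sc.Err() return "bufio.Scanner: token too long", which is
	// the error seen in the post-mortem stderr. 10 MiB here is an arbitrary cap.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan failed:", err)
	}
}
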

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (278.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-425599 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-425599 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m37.926645447s)

                                                
                                                
-- stdout --
	* [old-k8s-version-425599] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-425599" primary control-plane node in "old-k8s-version-425599" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:55:18.836294  296081 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:55:18.836553  296081 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:55:18.836563  296081 out.go:358] Setting ErrFile to fd 2...
	I0920 18:55:18.836568  296081 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:55:18.836760  296081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 18:55:18.837440  296081 out.go:352] Setting JSON to false
	I0920 18:55:18.838773  296081 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9462,"bootTime":1726849057,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:55:18.838923  296081 start.go:139] virtualization: kvm guest
	I0920 18:55:18.841638  296081 out.go:177] * [old-k8s-version-425599] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:55:18.843122  296081 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 18:55:18.843173  296081 notify.go:220] Checking for updates...
	I0920 18:55:18.846095  296081 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:55:18.847759  296081 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 18:55:18.849227  296081 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:55:18.850784  296081 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:55:18.852367  296081 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:55:18.854255  296081 config.go:182] Loaded profile config "bridge-793540": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:55:18.854394  296081 config.go:182] Loaded profile config "enable-default-cni-793540": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:55:18.854501  296081 config.go:182] Loaded profile config "flannel-793540": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:55:18.854625  296081 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:55:18.895170  296081 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 18:55:18.896465  296081 start.go:297] selected driver: kvm2
	I0920 18:55:18.896484  296081 start.go:901] validating driver "kvm2" against <nil>
	I0920 18:55:18.896509  296081 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:55:18.897289  296081 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:55:18.897415  296081 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:55:18.914709  296081 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:55:18.914780  296081 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:55:18.915151  296081 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:55:18.915199  296081 cni.go:84] Creating CNI manager for ""
	I0920 18:55:18.915271  296081 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:55:18.915286  296081 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 18:55:18.915368  296081 start.go:340] cluster config:
	{Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:55:18.915536  296081 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:55:18.917597  296081 out.go:177] * Starting "old-k8s-version-425599" primary control-plane node in "old-k8s-version-425599" cluster
	I0920 18:55:18.918821  296081 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:55:18.918886  296081 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 18:55:18.918899  296081 cache.go:56] Caching tarball of preloaded images
	I0920 18:55:18.919013  296081 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:55:18.919028  296081 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 18:55:18.919172  296081 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/config.json ...
	I0920 18:55:18.919213  296081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/config.json: {Name:mkf8afc00fb3d0b674416c7b8a214b174708b830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:55:18.919406  296081 start.go:360] acquireMachinesLock for old-k8s-version-425599: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:55:24.454913  296081 start.go:364] duration metric: took 5.53547385s to acquireMachinesLock for "old-k8s-version-425599"
	I0920 18:55:24.454994  296081 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:55:24.455134  296081 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 18:55:24.456870  296081 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 18:55:24.457181  296081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:55:24.457224  296081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:55:24.475916  296081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45417
	I0920 18:55:24.476445  296081 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:55:24.477063  296081 main.go:141] libmachine: Using API Version  1
	I0920 18:55:24.477087  296081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:55:24.477459  296081 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:55:24.477686  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 18:55:24.477861  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 18:55:24.478120  296081 start.go:159] libmachine.API.Create for "old-k8s-version-425599" (driver="kvm2")
	I0920 18:55:24.478150  296081 client.go:168] LocalClient.Create starting
	I0920 18:55:24.478185  296081 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem
	I0920 18:55:24.478227  296081 main.go:141] libmachine: Decoding PEM data...
	I0920 18:55:24.478247  296081 main.go:141] libmachine: Parsing certificate...
	I0920 18:55:24.478327  296081 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem
	I0920 18:55:24.478358  296081 main.go:141] libmachine: Decoding PEM data...
	I0920 18:55:24.478371  296081 main.go:141] libmachine: Parsing certificate...
	I0920 18:55:24.478393  296081 main.go:141] libmachine: Running pre-create checks...
	I0920 18:55:24.478406  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .PreCreateCheck
	I0920 18:55:24.478809  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetConfigRaw
	I0920 18:55:24.479303  296081 main.go:141] libmachine: Creating machine...
	I0920 18:55:24.479322  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .Create
	I0920 18:55:24.479468  296081 main.go:141] libmachine: (old-k8s-version-425599) Creating KVM machine...
	I0920 18:55:24.480956  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | found existing default KVM network
	I0920 18:55:24.482869  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 18:55:24.482688  296179 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003021b0}
	I0920 18:55:24.482898  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | created network xml: 
	I0920 18:55:24.482908  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | <network>
	I0920 18:55:24.482917  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG |   <name>mk-old-k8s-version-425599</name>
	I0920 18:55:24.482924  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG |   <dns enable='no'/>
	I0920 18:55:24.482932  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG |   
	I0920 18:55:24.482940  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 18:55:24.482948  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG |     <dhcp>
	I0920 18:55:24.482955  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 18:55:24.482963  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG |     </dhcp>
	I0920 18:55:24.482969  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG |   </ip>
	I0920 18:55:24.482976  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG |   
	I0920 18:55:24.482983  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | </network>
	I0920 18:55:24.482992  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | 
	I0920 18:55:24.488492  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | trying to create private KVM network mk-old-k8s-version-425599 192.168.39.0/24...
	I0920 18:55:24.569561  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | private KVM network mk-old-k8s-version-425599 192.168.39.0/24 created
	I0920 18:55:24.569590  296081 main.go:141] libmachine: (old-k8s-version-425599) Setting up store path in /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599 ...
	I0920 18:55:24.569609  296081 main.go:141] libmachine: (old-k8s-version-425599) Building disk image from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 18:55:24.569625  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 18:55:24.569560  296179 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:55:24.569837  296081 main.go:141] libmachine: (old-k8s-version-425599) Downloading /home/jenkins/minikube-integration/19679-237658/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 18:55:24.881260  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 18:55:24.881075  296179 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa...
	I0920 18:55:25.217981  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 18:55:25.217805  296179 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/old-k8s-version-425599.rawdisk...
	I0920 18:55:25.218082  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | Writing magic tar header
	I0920 18:55:25.218331  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | Writing SSH key tar header
	I0920 18:55:25.218542  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 18:55:25.218453  296179 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599 ...
	I0920 18:55:25.218633  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599
	I0920 18:55:25.218744  296081 main.go:141] libmachine: (old-k8s-version-425599) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599 (perms=drwx------)
	I0920 18:55:25.218767  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines
	I0920 18:55:25.218779  296081 main.go:141] libmachine: (old-k8s-version-425599) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:55:25.218796  296081 main.go:141] libmachine: (old-k8s-version-425599) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube (perms=drwxr-xr-x)
	I0920 18:55:25.218809  296081 main.go:141] libmachine: (old-k8s-version-425599) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658 (perms=drwxrwxr-x)
	I0920 18:55:25.218841  296081 main.go:141] libmachine: (old-k8s-version-425599) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:55:25.218859  296081 main.go:141] libmachine: (old-k8s-version-425599) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:55:25.218880  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:55:25.218896  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658
	I0920 18:55:25.218908  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:55:25.218920  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:55:25.218929  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | Checking permissions on dir: /home
	I0920 18:55:25.218941  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | Skipping /home - not owner
	I0920 18:55:25.218950  296081 main.go:141] libmachine: (old-k8s-version-425599) Creating domain...
	I0920 18:55:25.220660  296081 main.go:141] libmachine: (old-k8s-version-425599) define libvirt domain using xml: 
	I0920 18:55:25.220693  296081 main.go:141] libmachine: (old-k8s-version-425599) <domain type='kvm'>
	I0920 18:55:25.220705  296081 main.go:141] libmachine: (old-k8s-version-425599)   <name>old-k8s-version-425599</name>
	I0920 18:55:25.220712  296081 main.go:141] libmachine: (old-k8s-version-425599)   <memory unit='MiB'>2200</memory>
	I0920 18:55:25.220720  296081 main.go:141] libmachine: (old-k8s-version-425599)   <vcpu>2</vcpu>
	I0920 18:55:25.220727  296081 main.go:141] libmachine: (old-k8s-version-425599)   <features>
	I0920 18:55:25.220736  296081 main.go:141] libmachine: (old-k8s-version-425599)     <acpi/>
	I0920 18:55:25.220743  296081 main.go:141] libmachine: (old-k8s-version-425599)     <apic/>
	I0920 18:55:25.220757  296081 main.go:141] libmachine: (old-k8s-version-425599)     <pae/>
	I0920 18:55:25.220767  296081 main.go:141] libmachine: (old-k8s-version-425599)     
	I0920 18:55:25.220779  296081 main.go:141] libmachine: (old-k8s-version-425599)   </features>
	I0920 18:55:25.220786  296081 main.go:141] libmachine: (old-k8s-version-425599)   <cpu mode='host-passthrough'>
	I0920 18:55:25.220793  296081 main.go:141] libmachine: (old-k8s-version-425599)   
	I0920 18:55:25.220805  296081 main.go:141] libmachine: (old-k8s-version-425599)   </cpu>
	I0920 18:55:25.220813  296081 main.go:141] libmachine: (old-k8s-version-425599)   <os>
	I0920 18:55:25.220820  296081 main.go:141] libmachine: (old-k8s-version-425599)     <type>hvm</type>
	I0920 18:55:25.220829  296081 main.go:141] libmachine: (old-k8s-version-425599)     <boot dev='cdrom'/>
	I0920 18:55:25.220839  296081 main.go:141] libmachine: (old-k8s-version-425599)     <boot dev='hd'/>
	I0920 18:55:25.220847  296081 main.go:141] libmachine: (old-k8s-version-425599)     <bootmenu enable='no'/>
	I0920 18:55:25.220858  296081 main.go:141] libmachine: (old-k8s-version-425599)   </os>
	I0920 18:55:25.220866  296081 main.go:141] libmachine: (old-k8s-version-425599)   <devices>
	I0920 18:55:25.220872  296081 main.go:141] libmachine: (old-k8s-version-425599)     <disk type='file' device='cdrom'>
	I0920 18:55:25.220885  296081 main.go:141] libmachine: (old-k8s-version-425599)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/boot2docker.iso'/>
	I0920 18:55:25.220897  296081 main.go:141] libmachine: (old-k8s-version-425599)       <target dev='hdc' bus='scsi'/>
	I0920 18:55:25.220905  296081 main.go:141] libmachine: (old-k8s-version-425599)       <readonly/>
	I0920 18:55:25.220912  296081 main.go:141] libmachine: (old-k8s-version-425599)     </disk>
	I0920 18:55:25.220920  296081 main.go:141] libmachine: (old-k8s-version-425599)     <disk type='file' device='disk'>
	I0920 18:55:25.220931  296081 main.go:141] libmachine: (old-k8s-version-425599)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:55:25.220960  296081 main.go:141] libmachine: (old-k8s-version-425599)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/old-k8s-version-425599.rawdisk'/>
	I0920 18:55:25.220988  296081 main.go:141] libmachine: (old-k8s-version-425599)       <target dev='hda' bus='virtio'/>
	I0920 18:55:25.221000  296081 main.go:141] libmachine: (old-k8s-version-425599)     </disk>
	I0920 18:55:25.221006  296081 main.go:141] libmachine: (old-k8s-version-425599)     <interface type='network'>
	I0920 18:55:25.221014  296081 main.go:141] libmachine: (old-k8s-version-425599)       <source network='mk-old-k8s-version-425599'/>
	I0920 18:55:25.221022  296081 main.go:141] libmachine: (old-k8s-version-425599)       <model type='virtio'/>
	I0920 18:55:25.221027  296081 main.go:141] libmachine: (old-k8s-version-425599)     </interface>
	I0920 18:55:25.221034  296081 main.go:141] libmachine: (old-k8s-version-425599)     <interface type='network'>
	I0920 18:55:25.221041  296081 main.go:141] libmachine: (old-k8s-version-425599)       <source network='default'/>
	I0920 18:55:25.221050  296081 main.go:141] libmachine: (old-k8s-version-425599)       <model type='virtio'/>
	I0920 18:55:25.221058  296081 main.go:141] libmachine: (old-k8s-version-425599)     </interface>
	I0920 18:55:25.221069  296081 main.go:141] libmachine: (old-k8s-version-425599)     <serial type='pty'>
	I0920 18:55:25.221079  296081 main.go:141] libmachine: (old-k8s-version-425599)       <target port='0'/>
	I0920 18:55:25.221089  296081 main.go:141] libmachine: (old-k8s-version-425599)     </serial>
	I0920 18:55:25.221098  296081 main.go:141] libmachine: (old-k8s-version-425599)     <console type='pty'>
	I0920 18:55:25.221108  296081 main.go:141] libmachine: (old-k8s-version-425599)       <target type='serial' port='0'/>
	I0920 18:55:25.221116  296081 main.go:141] libmachine: (old-k8s-version-425599)     </console>
	I0920 18:55:25.221126  296081 main.go:141] libmachine: (old-k8s-version-425599)     <rng model='virtio'>
	I0920 18:55:25.221132  296081 main.go:141] libmachine: (old-k8s-version-425599)       <backend model='random'>/dev/random</backend>
	I0920 18:55:25.221143  296081 main.go:141] libmachine: (old-k8s-version-425599)     </rng>
	I0920 18:55:25.221149  296081 main.go:141] libmachine: (old-k8s-version-425599)     
	I0920 18:55:25.221155  296081 main.go:141] libmachine: (old-k8s-version-425599)     
	I0920 18:55:25.221166  296081 main.go:141] libmachine: (old-k8s-version-425599)   </devices>
	I0920 18:55:25.221172  296081 main.go:141] libmachine: (old-k8s-version-425599) </domain>
	I0920 18:55:25.221186  296081 main.go:141] libmachine: (old-k8s-version-425599) 
	I0920 18:55:25.226225  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:05:2e:69 in network default
	I0920 18:55:25.227033  296081 main.go:141] libmachine: (old-k8s-version-425599) Ensuring networks are active...
	I0920 18:55:25.227055  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:25.227958  296081 main.go:141] libmachine: (old-k8s-version-425599) Ensuring network default is active
	I0920 18:55:25.228424  296081 main.go:141] libmachine: (old-k8s-version-425599) Ensuring network mk-old-k8s-version-425599 is active
	I0920 18:55:25.229113  296081 main.go:141] libmachine: (old-k8s-version-425599) Getting domain xml...
	I0920 18:55:25.230146  296081 main.go:141] libmachine: (old-k8s-version-425599) Creating domain...
	I0920 18:55:26.943994  296081 main.go:141] libmachine: (old-k8s-version-425599) Waiting to get IP...
	I0920 18:55:26.945101  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:26.945748  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 18:55:26.945782  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 18:55:26.945710  296179 retry.go:31] will retry after 311.403231ms: waiting for machine to come up
	I0920 18:55:27.259648  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:27.260350  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 18:55:27.260373  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 18:55:27.260300  296179 retry.go:31] will retry after 385.247321ms: waiting for machine to come up
	I0920 18:55:27.647012  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:27.647675  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 18:55:27.647699  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 18:55:27.647633  296179 retry.go:31] will retry after 305.281611ms: waiting for machine to come up
	I0920 18:55:27.954352  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:27.955014  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 18:55:27.955048  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 18:55:27.954954  296179 retry.go:31] will retry after 464.857769ms: waiting for machine to come up
	I0920 18:55:28.421606  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:28.422249  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 18:55:28.422284  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 18:55:28.422187  296179 retry.go:31] will retry after 721.681786ms: waiting for machine to come up
	I0920 18:55:29.145402  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:29.146147  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 18:55:29.146170  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 18:55:29.146105  296179 retry.go:31] will retry after 714.02087ms: waiting for machine to come up
	I0920 18:55:29.861696  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:29.862344  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 18:55:29.862372  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 18:55:29.862293  296179 retry.go:31] will retry after 984.516027ms: waiting for machine to come up
	I0920 18:55:30.848232  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:30.848831  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 18:55:30.848857  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 18:55:30.848777  296179 retry.go:31] will retry after 1.108407851s: waiting for machine to come up
	I0920 18:55:31.959205  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:31.959752  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 18:55:31.959774  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 18:55:31.959698  296179 retry.go:31] will retry after 1.536532157s: waiting for machine to come up
	I0920 18:55:33.497530  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:33.498183  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 18:55:33.498212  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 18:55:33.498128  296179 retry.go:31] will retry after 2.082065103s: waiting for machine to come up
	I0920 18:55:35.581599  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:35.582131  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 18:55:35.582153  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 18:55:35.582074  296179 retry.go:31] will retry after 2.66141745s: waiting for machine to come up
	I0920 18:55:38.245417  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:38.246107  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 18:55:38.246137  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 18:55:38.246052  296179 retry.go:31] will retry after 2.563339564s: waiting for machine to come up
	I0920 18:55:40.810968  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:40.811542  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 18:55:40.811570  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 18:55:40.811495  296179 retry.go:31] will retry after 3.711580937s: waiting for machine to come up
	I0920 18:55:44.525096  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:44.525694  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 18:55:44.525720  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 18:55:44.525656  296179 retry.go:31] will retry after 4.191584874s: waiting for machine to come up
	I0920 18:55:48.719456  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:48.720097  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has current primary IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:48.720122  296081 main.go:141] libmachine: (old-k8s-version-425599) Found IP for machine: 192.168.39.53
	I0920 18:55:48.720135  296081 main.go:141] libmachine: (old-k8s-version-425599) Reserving static IP address...
	I0920 18:55:48.720471  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-425599", mac: "52:54:00:d2:a5:70", ip: "192.168.39.53"} in network mk-old-k8s-version-425599
	I0920 18:55:48.809787  296081 main.go:141] libmachine: (old-k8s-version-425599) Reserved static IP address: 192.168.39.53
	I0920 18:55:48.809816  296081 main.go:141] libmachine: (old-k8s-version-425599) Waiting for SSH to be available...
	I0920 18:55:48.809826  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | Getting to WaitForSSH function...
	I0920 18:55:48.813297  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:48.813920  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 19:55:40 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d2:a5:70}
	I0920 18:55:48.813954  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:48.814234  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | Using SSH client type: external
	I0920 18:55:48.814264  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa (-rw-------)
	I0920 18:55:48.814298  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:55:48.814315  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | About to run SSH command:
	I0920 18:55:48.814419  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | exit 0
	I0920 18:55:48.946060  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | SSH cmd err, output: <nil>: 
	I0920 18:55:48.946325  296081 main.go:141] libmachine: (old-k8s-version-425599) KVM machine creation complete!
	I0920 18:55:48.946757  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetConfigRaw
	I0920 18:55:48.947311  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 18:55:48.947533  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 18:55:48.947685  296081 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:55:48.947698  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetState
	I0920 18:55:48.949155  296081 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:55:48.949171  296081 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:55:48.949177  296081 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:55:48.949186  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 18:55:48.952109  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:48.952624  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 19:55:40 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 18:55:48.952656  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:48.952824  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 18:55:48.953023  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 18:55:48.953258  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 18:55:48.953434  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 18:55:48.953689  296081 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:48.953886  296081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 18:55:48.953898  296081 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:55:49.061480  296081 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:55:49.061504  296081 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:55:49.061511  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 18:55:49.064342  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:49.064785  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 19:55:40 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 18:55:49.064808  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:49.064995  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 18:55:49.065213  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 18:55:49.065423  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 18:55:49.065635  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 18:55:49.065845  296081 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:49.066040  296081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 18:55:49.066052  296081 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:55:49.166633  296081 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:55:49.166756  296081 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:55:49.166772  296081 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:55:49.166784  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 18:55:49.167055  296081 buildroot.go:166] provisioning hostname "old-k8s-version-425599"
	I0920 18:55:49.167084  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 18:55:49.167307  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 18:55:49.170107  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:49.170479  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 19:55:40 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 18:55:49.170504  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:49.170685  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 18:55:49.170904  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 18:55:49.171047  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 18:55:49.171184  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 18:55:49.171366  296081 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:49.171576  296081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 18:55:49.171593  296081 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-425599 && echo "old-k8s-version-425599" | sudo tee /etc/hostname
	I0920 18:55:49.292910  296081 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-425599
	
	I0920 18:55:49.292945  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 18:55:49.295708  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:49.296120  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 19:55:40 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 18:55:49.296166  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:49.296412  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 18:55:49.296620  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 18:55:49.296798  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 18:55:49.296962  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 18:55:49.297176  296081 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:49.297452  296081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 18:55:49.297475  296081 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-425599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-425599/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-425599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:55:49.409448  296081 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:55:49.409487  296081 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 18:55:49.409527  296081 buildroot.go:174] setting up certificates
	I0920 18:55:49.409543  296081 provision.go:84] configureAuth start
	I0920 18:55:49.409561  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 18:55:49.409879  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 18:55:49.413381  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:49.413808  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 19:55:40 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 18:55:49.413847  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:49.414066  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 18:55:49.416877  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:49.417320  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 19:55:40 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 18:55:49.417349  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:49.417492  296081 provision.go:143] copyHostCerts
	I0920 18:55:49.417559  296081 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 18:55:49.417577  296081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 18:55:49.417639  296081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 18:55:49.417829  296081 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 18:55:49.417844  296081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 18:55:49.417875  296081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 18:55:49.417998  296081 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 18:55:49.418010  296081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 18:55:49.418036  296081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 18:55:49.418104  296081 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-425599 san=[127.0.0.1 192.168.39.53 localhost minikube old-k8s-version-425599]
	I0920 18:55:49.665681  296081 provision.go:177] copyRemoteCerts
	I0920 18:55:49.665748  296081 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:55:49.665777  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 18:55:49.668982  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:49.669433  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 19:55:40 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 18:55:49.669461  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:49.669677  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 18:55:49.669939  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 18:55:49.670111  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 18:55:49.670297  296081 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 18:55:49.752844  296081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:55:49.778591  296081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 18:55:49.806412  296081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:55:49.832230  296081 provision.go:87] duration metric: took 422.667199ms to configureAuth
	I0920 18:55:49.832268  296081 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:55:49.832440  296081 config.go:182] Loaded profile config "old-k8s-version-425599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 18:55:49.832520  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 18:55:49.836089  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:49.836452  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 19:55:40 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 18:55:49.836489  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:49.836700  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 18:55:49.836950  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 18:55:49.837160  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 18:55:49.837332  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 18:55:49.837514  296081 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:49.837699  296081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 18:55:49.837715  296081 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:55:50.071576  296081 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:55:50.071612  296081 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:55:50.071623  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetURL
	I0920 18:55:50.073064  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | Using libvirt version 6000000
	I0920 18:55:50.075356  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:50.075707  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 19:55:40 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 18:55:50.075738  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:50.075989  296081 main.go:141] libmachine: Docker is up and running!
	I0920 18:55:50.076006  296081 main.go:141] libmachine: Reticulating splines...
	I0920 18:55:50.076014  296081 client.go:171] duration metric: took 25.597856053s to LocalClient.Create
	I0920 18:55:50.076032  296081 start.go:167] duration metric: took 25.597917658s to libmachine.API.Create "old-k8s-version-425599"
	I0920 18:55:50.076042  296081 start.go:293] postStartSetup for "old-k8s-version-425599" (driver="kvm2")
	I0920 18:55:50.076051  296081 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:55:50.076068  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 18:55:50.076303  296081 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:55:50.076330  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 18:55:50.078872  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:50.079250  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 19:55:40 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 18:55:50.079282  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:50.079428  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 18:55:50.079630  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 18:55:50.079793  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 18:55:50.079940  296081 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 18:55:50.164124  296081 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:55:50.168081  296081 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:55:50.168107  296081 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 18:55:50.168167  296081 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 18:55:50.168251  296081 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 18:55:50.168337  296081 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:55:50.177176  296081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:55:50.202210  296081 start.go:296] duration metric: took 126.150276ms for postStartSetup
	I0920 18:55:50.202263  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetConfigRaw
	I0920 18:55:50.202912  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 18:55:50.205988  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:50.206517  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 19:55:40 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 18:55:50.206552  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:50.206913  296081 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/config.json ...
	I0920 18:55:50.207143  296081 start.go:128] duration metric: took 25.751992006s to createHost
	I0920 18:55:50.207177  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 18:55:50.209726  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:50.210127  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 19:55:40 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 18:55:50.210159  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:50.210309  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 18:55:50.210488  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 18:55:50.210677  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 18:55:50.210870  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 18:55:50.211027  296081 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:50.211228  296081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 18:55:50.211244  296081 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:55:50.314633  296081 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726858550.288142907
	
	I0920 18:55:50.314669  296081 fix.go:216] guest clock: 1726858550.288142907
	I0920 18:55:50.314680  296081 fix.go:229] Guest: 2024-09-20 18:55:50.288142907 +0000 UTC Remote: 2024-09-20 18:55:50.207159097 +0000 UTC m=+31.411807018 (delta=80.98381ms)
	I0920 18:55:50.314703  296081 fix.go:200] guest clock delta is within tolerance: 80.98381ms
	I0920 18:55:50.314709  296081 start.go:83] releasing machines lock for "old-k8s-version-425599", held for 25.859752403s
	I0920 18:55:50.314733  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 18:55:50.315043  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 18:55:50.318268  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:50.318627  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 19:55:40 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 18:55:50.318658  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:50.318833  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 18:55:50.319453  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 18:55:50.319665  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 18:55:50.319767  296081 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:55:50.319834  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 18:55:50.319901  296081 ssh_runner.go:195] Run: cat /version.json
	I0920 18:55:50.319922  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 18:55:50.323088  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:50.323489  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 19:55:40 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 18:55:50.323520  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:50.323544  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:50.323787  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 18:55:50.323979  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 18:55:50.324001  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 19:55:40 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 18:55:50.324023  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:50.324164  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 18:55:50.324196  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 18:55:50.324350  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 18:55:50.324349  296081 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 18:55:50.324520  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 18:55:50.324682  296081 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 18:55:50.437064  296081 ssh_runner.go:195] Run: systemctl --version
	I0920 18:55:50.444731  296081 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:55:50.612200  296081 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:55:50.626444  296081 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:55:50.626538  296081 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:55:50.645069  296081 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:55:50.645099  296081 start.go:495] detecting cgroup driver to use...
	I0920 18:55:50.645178  296081 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:55:50.662943  296081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:55:50.678612  296081 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:55:50.678686  296081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:55:50.693445  296081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:55:50.709588  296081 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:55:50.831999  296081 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:55:51.022454  296081 docker.go:233] disabling docker service ...
	I0920 18:55:51.022541  296081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:55:51.039473  296081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:55:51.054288  296081 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:55:51.207909  296081 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:55:51.355820  296081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:55:51.375775  296081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:55:51.397179  296081 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 18:55:51.397266  296081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:55:51.409288  296081 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:55:51.409372  296081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:55:51.421543  296081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:55:51.432224  296081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:55:51.444358  296081 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:55:51.457081  296081 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:55:51.468486  296081 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:55:51.468561  296081 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:55:51.485490  296081 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:55:51.496499  296081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:55:51.612479  296081 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:55:51.718865  296081 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:55:51.718959  296081 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:55:51.724509  296081 start.go:563] Will wait 60s for crictl version
	I0920 18:55:51.724594  296081 ssh_runner.go:195] Run: which crictl
	I0920 18:55:51.729455  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:55:51.784837  296081 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:55:51.784958  296081 ssh_runner.go:195] Run: crio --version
	I0920 18:55:51.822886  296081 ssh_runner.go:195] Run: crio --version
	I0920 18:55:51.861090  296081 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 18:55:51.862478  296081 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 18:55:51.866218  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:51.866677  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 19:55:40 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 18:55:51.866710  296081 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 18:55:51.866991  296081 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:55:51.871984  296081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:55:51.888125  296081 kubeadm.go:883] updating cluster {Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:55:51.888235  296081 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:55:51.888277  296081 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:55:51.924240  296081 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 18:55:51.924329  296081 ssh_runner.go:195] Run: which lz4
	I0920 18:55:51.928746  296081 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:55:51.933302  296081 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:55:51.933350  296081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 18:55:53.551927  296081 crio.go:462] duration metric: took 1.623215918s to copy over tarball
	I0920 18:55:53.552008  296081 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:55:56.636769  296081 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.084726595s)
	I0920 18:55:56.636804  296081 crio.go:469] duration metric: took 3.084841783s to extract the tarball
	I0920 18:55:56.636814  296081 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:55:56.681811  296081 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:55:56.732407  296081 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 18:55:56.732435  296081 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 18:55:56.732477  296081 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:55:56.732765  296081 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:55:56.732868  296081 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:55:56.732972  296081 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:55:56.733084  296081 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:55:56.733193  296081 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 18:55:56.733320  296081 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:55:56.733417  296081 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 18:55:56.734762  296081 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:55:56.734833  296081 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:55:56.734960  296081 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 18:55:56.735017  296081 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:55:56.735055  296081 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 18:55:56.735132  296081 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:55:56.735225  296081 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:55:56.735233  296081 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:55:56.954654  296081 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 18:55:56.994982  296081 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 18:55:56.995039  296081 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 18:55:56.995093  296081 ssh_runner.go:195] Run: which crictl
	I0920 18:55:57.001862  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:55:57.040260  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:55:57.042421  296081 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 18:55:57.052092  296081 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:55:57.052180  296081 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:55:57.056807  296081 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:55:57.058901  296081 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 18:55:57.078954  296081 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:55:57.089044  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 18:55:57.201505  296081 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 18:55:57.201569  296081 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 18:55:57.201632  296081 ssh_runner.go:195] Run: which crictl
	I0920 18:55:57.262404  296081 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 18:55:57.262444  296081 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 18:55:57.262456  296081 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:55:57.262460  296081 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 18:55:57.262479  296081 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:55:57.262483  296081 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:55:57.262514  296081 ssh_runner.go:195] Run: which crictl
	I0920 18:55:57.262528  296081 ssh_runner.go:195] Run: which crictl
	I0920 18:55:57.262546  296081 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 18:55:57.262580  296081 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 18:55:57.262551  296081 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 18:55:57.262601  296081 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:55:57.262615  296081 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 18:55:57.262631  296081 ssh_runner.go:195] Run: which crictl
	I0920 18:55:57.262620  296081 ssh_runner.go:195] Run: which crictl
	I0920 18:55:57.262677  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:55:57.262514  296081 ssh_runner.go:195] Run: which crictl
	I0920 18:55:57.275780  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:55:57.324693  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:55:57.324722  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:55:57.324723  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:55:57.324773  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:55:57.324788  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:55:57.324847  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:55:57.456120  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:55:57.456140  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:55:57.456120  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 18:55:57.456120  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 18:55:57.458793  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:55:57.458832  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:55:57.569021  296081 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 18:55:57.586674  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 18:55:57.586711  296081 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 18:55:57.586758  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 18:55:57.601269  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 18:55:57.601374  296081 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 18:55:57.671584  296081 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 18:55:57.671647  296081 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 18:55:57.683663  296081 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 18:55:57.695560  296081 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 18:55:57.953265  296081 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:55:58.101519  296081 cache_images.go:92] duration metric: took 1.36906322s to LoadCachedImages
	W0920 18:55:58.101621  296081 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0920 18:55:58.101643  296081 kubeadm.go:934] updating node { 192.168.39.53 8443 v1.20.0 crio true true} ...
	I0920 18:55:58.101772  296081 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-425599 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:55:58.101871  296081 ssh_runner.go:195] Run: crio config
	I0920 18:55:58.154765  296081 cni.go:84] Creating CNI manager for ""
	I0920 18:55:58.154800  296081 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:55:58.154813  296081 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:55:58.154842  296081 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.53 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-425599 NodeName:old-k8s-version-425599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 18:55:58.154999  296081 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-425599"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:55:58.155062  296081 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 18:55:58.165743  296081 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:55:58.165830  296081 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:55:58.175823  296081 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0920 18:55:58.197671  296081 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:55:58.219836  296081 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0920 18:55:58.237674  296081 ssh_runner.go:195] Run: grep 192.168.39.53	control-plane.minikube.internal$ /etc/hosts
	I0920 18:55:58.241963  296081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:55:58.254828  296081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:55:58.398268  296081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:55:58.419581  296081 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599 for IP: 192.168.39.53
	I0920 18:55:58.419649  296081 certs.go:194] generating shared ca certs ...
	I0920 18:55:58.419673  296081 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:55:58.419909  296081 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 18:55:58.419968  296081 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 18:55:58.420004  296081 certs.go:256] generating profile certs ...
	I0920 18:55:58.420095  296081 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/client.key
	I0920 18:55:58.420114  296081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/client.crt with IP's: []
	I0920 18:55:58.516056  296081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/client.crt ...
	I0920 18:55:58.516099  296081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/client.crt: {Name:mk9e1da856eaf419bf6d9789c9baaf08b42d0165 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:55:58.516322  296081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/client.key ...
	I0920 18:55:58.516344  296081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/client.key: {Name:mkc0b9cdc694da5c229f099f7e3ccbed481f136f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:55:58.516471  296081 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.key.e78cb154
	I0920 18:55:58.516497  296081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.crt.e78cb154 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.53]
	I0920 18:55:58.860100  296081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.crt.e78cb154 ...
	I0920 18:55:58.860140  296081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.crt.e78cb154: {Name:mk0e19baa258bdf2c934e24642835788642b409a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:55:58.860327  296081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.key.e78cb154 ...
	I0920 18:55:58.860343  296081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.key.e78cb154: {Name:mk2c1865960ee5824e7236fc30964714beb9bc59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:55:58.860419  296081 certs.go:381] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.crt.e78cb154 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.crt
	I0920 18:55:58.860492  296081 certs.go:385] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.key.e78cb154 -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.key
	I0920 18:55:58.860544  296081 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.key
	I0920 18:55:58.860559  296081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.crt with IP's: []
	I0920 18:55:58.997978  296081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.crt ...
	I0920 18:55:58.998011  296081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.crt: {Name:mkecf9db9be4b52ef3ae54fe09994140c8e8432a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:55:58.998165  296081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.key ...
	I0920 18:55:58.998180  296081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.key: {Name:mkc6c02551e3bdd6fdb1111b353f25f5169d3bd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:55:58.998352  296081 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 18:55:58.998387  296081 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 18:55:58.998398  296081 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:55:58.998419  296081 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:55:58.998443  296081 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:55:58.998463  296081 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 18:55:58.998499  296081 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 18:55:58.999071  296081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:55:59.029979  296081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:55:59.061638  296081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:55:59.086685  296081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:55:59.112904  296081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 18:55:59.139512  296081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:55:59.177295  296081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:55:59.203534  296081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:55:59.228481  296081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 18:55:59.253757  296081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:55:59.279661  296081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 18:55:59.303791  296081 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:55:59.332366  296081 ssh_runner.go:195] Run: openssl version
	I0920 18:55:59.341404  296081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 18:55:59.354456  296081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 18:55:59.359977  296081 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 18:55:59.360045  296081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 18:55:59.366588  296081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:55:59.380524  296081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:55:59.392472  296081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:55:59.397237  296081 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:55:59.397309  296081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:55:59.405405  296081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:55:59.416803  296081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 18:55:59.427972  296081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 18:55:59.432806  296081 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 18:55:59.432905  296081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 18:55:59.440580  296081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 18:55:59.451895  296081 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:55:59.456241  296081 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:55:59.456307  296081 kubeadm.go:392] StartCluster: {Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:55:59.456406  296081 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:55:59.456476  296081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:55:59.503671  296081 cri.go:89] found id: ""
	I0920 18:55:59.503777  296081 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:55:59.514616  296081 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:55:59.524544  296081 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:55:59.535826  296081 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:55:59.535850  296081 kubeadm.go:157] found existing configuration files:
	
	I0920 18:55:59.535897  296081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:55:59.549435  296081 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:55:59.549510  296081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:55:59.562342  296081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:55:59.574015  296081 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:55:59.574086  296081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:55:59.583870  296081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:55:59.593523  296081 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:55:59.593602  296081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:55:59.604098  296081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:55:59.613660  296081 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:55:59.613741  296081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:55:59.623548  296081 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:55:59.765672  296081 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:55:59.765753  296081 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:55:59.912246  296081 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:55:59.912392  296081 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:55:59.912538  296081 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:56:00.119694  296081 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:56:00.202205  296081 out.go:235]   - Generating certificates and keys ...
	I0920 18:56:00.202344  296081 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:56:00.202441  296081 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:56:00.260756  296081 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 18:56:00.435691  296081 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 18:56:00.501767  296081 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 18:56:00.637011  296081 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 18:56:00.753696  296081 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 18:56:00.753918  296081 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-425599] and IPs [192.168.39.53 127.0.0.1 ::1]
	I0920 18:56:00.867100  296081 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 18:56:00.867423  296081 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-425599] and IPs [192.168.39.53 127.0.0.1 ::1]
	I0920 18:56:01.046723  296081 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 18:56:01.160192  296081 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 18:56:01.306076  296081 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 18:56:01.306426  296081 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:56:01.659949  296081 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:56:01.769340  296081 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:56:02.189025  296081 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:56:02.905485  296081 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:56:02.922990  296081 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:56:02.923972  296081 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:56:02.924093  296081 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:56:03.112120  296081 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:56:03.114236  296081 out.go:235]   - Booting up control plane ...
	I0920 18:56:03.114391  296081 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:56:03.130609  296081 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:56:03.132121  296081 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:56:03.133227  296081 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:56:03.139747  296081 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:56:43.133051  296081 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:56:43.133207  296081 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:56:43.133445  296081 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:56:48.134333  296081 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:56:48.134671  296081 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:56:58.133743  296081 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:56:58.134068  296081 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:57:18.133625  296081 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:57:18.133857  296081 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:57:58.136059  296081 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:57:58.136514  296081 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:57:58.136553  296081 kubeadm.go:310] 
	I0920 18:57:58.136641  296081 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:57:58.136744  296081 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:57:58.136768  296081 kubeadm.go:310] 
	I0920 18:57:58.136852  296081 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:57:58.136939  296081 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:57:58.137173  296081 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:57:58.137193  296081 kubeadm.go:310] 
	I0920 18:57:58.137456  296081 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:57:58.137534  296081 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:57:58.137606  296081 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:57:58.137633  296081 kubeadm.go:310] 
	I0920 18:57:58.137963  296081 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:57:58.138173  296081 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:57:58.138185  296081 kubeadm.go:310] 
	I0920 18:57:58.138393  296081 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:57:58.138570  296081 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:57:58.138772  296081 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:57:58.138949  296081 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:57:58.138982  296081 kubeadm.go:310] 
	I0920 18:57:58.139457  296081 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:57:58.139913  296081 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:57:58.140010  296081 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0920 18:57:58.140113  296081 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-425599] and IPs [192.168.39.53 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-425599] and IPs [192.168.39.53 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0920 18:57:58.140157  296081 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 18:57:59.376638  296081 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.236453573s)
	I0920 18:57:59.376721  296081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:57:59.390943  296081 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:57:59.400634  296081 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:57:59.400658  296081 kubeadm.go:157] found existing configuration files:
	
	I0920 18:57:59.400709  296081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:57:59.410548  296081 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:57:59.410632  296081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:57:59.420932  296081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:57:59.430821  296081 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:57:59.430888  296081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:57:59.440779  296081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:57:59.450168  296081 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:57:59.450226  296081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:57:59.460227  296081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:57:59.470037  296081 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:57:59.470116  296081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:57:59.480057  296081 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:57:59.695473  296081 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:59:56.052128  296081 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 18:59:56.052264  296081 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 18:59:56.053955  296081 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 18:59:56.054017  296081 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:59:56.054080  296081 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:59:56.054190  296081 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:59:56.054307  296081 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 18:59:56.054404  296081 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:59:56.057171  296081 out.go:235]   - Generating certificates and keys ...
	I0920 18:59:56.057275  296081 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:59:56.057346  296081 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:59:56.057438  296081 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 18:59:56.057528  296081 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 18:59:56.057627  296081 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 18:59:56.057719  296081 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 18:59:56.057835  296081 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 18:59:56.057955  296081 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 18:59:56.058036  296081 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 18:59:56.058122  296081 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 18:59:56.058181  296081 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 18:59:56.058230  296081 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:59:56.058300  296081 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:59:56.058363  296081 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:59:56.058418  296081 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:59:56.058465  296081 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:59:56.058588  296081 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:59:56.058701  296081 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:59:56.058736  296081 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:59:56.058820  296081 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:59:56.060773  296081 out.go:235]   - Booting up control plane ...
	I0920 18:59:56.060907  296081 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:59:56.060993  296081 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:59:56.061068  296081 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:59:56.061151  296081 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:59:56.061296  296081 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 18:59:56.061351  296081 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 18:59:56.061405  296081 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:59:56.061586  296081 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:59:56.061660  296081 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:59:56.061819  296081 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:59:56.061885  296081 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:59:56.062054  296081 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:59:56.062112  296081 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:59:56.062291  296081 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:59:56.062354  296081 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 18:59:56.062509  296081 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 18:59:56.062520  296081 kubeadm.go:310] 
	I0920 18:59:56.062554  296081 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 18:59:56.062592  296081 kubeadm.go:310] 		timed out waiting for the condition
	I0920 18:59:56.062598  296081 kubeadm.go:310] 
	I0920 18:59:56.062635  296081 kubeadm.go:310] 	This error is likely caused by:
	I0920 18:59:56.062666  296081 kubeadm.go:310] 		- The kubelet is not running
	I0920 18:59:56.062778  296081 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 18:59:56.062793  296081 kubeadm.go:310] 
	I0920 18:59:56.062899  296081 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 18:59:56.062952  296081 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 18:59:56.063001  296081 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 18:59:56.063010  296081 kubeadm.go:310] 
	I0920 18:59:56.063151  296081 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 18:59:56.063248  296081 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 18:59:56.063270  296081 kubeadm.go:310] 
	I0920 18:59:56.063369  296081 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 18:59:56.063490  296081 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 18:59:56.063608  296081 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 18:59:56.063713  296081 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 18:59:56.063781  296081 kubeadm.go:394] duration metric: took 3m56.60748231s to StartCluster
	I0920 18:59:56.063805  296081 kubeadm.go:310] 
	I0920 18:59:56.063821  296081 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:59:56.063873  296081 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:59:56.112555  296081 cri.go:89] found id: ""
	I0920 18:59:56.112589  296081 logs.go:276] 0 containers: []
	W0920 18:59:56.112598  296081 logs.go:278] No container was found matching "kube-apiserver"
	I0920 18:59:56.112635  296081 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:59:56.112748  296081 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:59:56.154637  296081 cri.go:89] found id: ""
	I0920 18:59:56.154670  296081 logs.go:276] 0 containers: []
	W0920 18:59:56.154678  296081 logs.go:278] No container was found matching "etcd"
	I0920 18:59:56.154685  296081 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:59:56.154750  296081 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:59:56.194716  296081 cri.go:89] found id: ""
	I0920 18:59:56.194748  296081 logs.go:276] 0 containers: []
	W0920 18:59:56.194757  296081 logs.go:278] No container was found matching "coredns"
	I0920 18:59:56.194763  296081 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:59:56.194828  296081 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:59:56.240796  296081 cri.go:89] found id: ""
	I0920 18:59:56.240828  296081 logs.go:276] 0 containers: []
	W0920 18:59:56.240836  296081 logs.go:278] No container was found matching "kube-scheduler"
	I0920 18:59:56.240843  296081 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:59:56.240895  296081 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:59:56.273465  296081 cri.go:89] found id: ""
	I0920 18:59:56.273495  296081 logs.go:276] 0 containers: []
	W0920 18:59:56.273504  296081 logs.go:278] No container was found matching "kube-proxy"
	I0920 18:59:56.273511  296081 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:59:56.273573  296081 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:59:56.307744  296081 cri.go:89] found id: ""
	I0920 18:59:56.307785  296081 logs.go:276] 0 containers: []
	W0920 18:59:56.307799  296081 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 18:59:56.307809  296081 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:59:56.307881  296081 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:59:56.342150  296081 cri.go:89] found id: ""
	I0920 18:59:56.342186  296081 logs.go:276] 0 containers: []
	W0920 18:59:56.342196  296081 logs.go:278] No container was found matching "kindnet"
	I0920 18:59:56.342207  296081 logs.go:123] Gathering logs for container status ...
	I0920 18:59:56.342222  296081 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:59:56.380519  296081 logs.go:123] Gathering logs for kubelet ...
	I0920 18:59:56.380554  296081 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:59:56.429236  296081 logs.go:123] Gathering logs for dmesg ...
	I0920 18:59:56.429276  296081 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:59:56.444964  296081 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:59:56.444995  296081 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 18:59:56.590875  296081 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 18:59:56.590903  296081 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:59:56.590915  296081 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0920 18:59:56.703798  296081 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 18:59:56.703878  296081 out.go:270] * 
	W0920 18:59:56.703938  296081 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:59:56.703964  296081 out.go:270] * 
	W0920 18:59:56.704808  296081 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:59:56.708214  296081 out.go:201] 
	W0920 18:59:56.709689  296081 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 18:59:56.709748  296081 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 18:59:56.709767  296081 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 18:59:56.711543  296081 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-425599 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-425599 -n old-k8s-version-425599
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-425599 -n old-k8s-version-425599: exit status 6 (232.679173ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:59:56.987780  302693 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-425599" does not appear in /home/jenkins/minikube-integration/19679-237658/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-425599" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (278.22s)
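The failure above is kubeadm timing out in the wait-control-plane phase: the kubelet never answered its health check on 127.0.0.1:10248, so no control-plane static pods came up and `minikube start` exited with status 109. The kubeadm output and the minikube suggestion already name the next diagnostic steps; the following is a minimal triage sketch, assuming SSH access to the VM (for example via `minikube ssh -p old-k8s-version-425599`) and the cri-o socket path shown in the output above.

	# On the VM: is the kubelet running, and why did it exit?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet --no-pager | tail -n 50

	# List control-plane containers in cri-o to spot one that crashed on start.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# If the cgroup driver is the culprit (per the suggestion above), retry the start
	# from the host with the kubelet cgroup driver set explicitly.
	out/minikube-linux-amd64 start -p old-k8s-version-425599 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd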

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-037711 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-037711 --alsologtostderr -v=3: exit status 82 (2m0.558458728s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-037711"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:57:14.935201  301596 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:57:14.935436  301596 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:57:14.935467  301596 out.go:358] Setting ErrFile to fd 2...
	I0920 18:57:14.935483  301596 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:57:14.935893  301596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 18:57:14.936589  301596 out.go:352] Setting JSON to false
	I0920 18:57:14.936705  301596 mustload.go:65] Loading cluster: no-preload-037711
	I0920 18:57:14.937205  301596 config.go:182] Loaded profile config "no-preload-037711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:57:14.937312  301596 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/config.json ...
	I0920 18:57:14.937553  301596 mustload.go:65] Loading cluster: no-preload-037711
	I0920 18:57:14.937704  301596 config.go:182] Loaded profile config "no-preload-037711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:57:14.937750  301596 stop.go:39] StopHost: no-preload-037711
	I0920 18:57:14.938359  301596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:57:14.938417  301596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:57:14.954157  301596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37931
	I0920 18:57:14.954981  301596 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:57:14.955670  301596 main.go:141] libmachine: Using API Version  1
	I0920 18:57:14.955718  301596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:57:14.956247  301596 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:57:14.959016  301596 out.go:177] * Stopping node "no-preload-037711"  ...
	I0920 18:57:14.960415  301596 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 18:57:14.960469  301596 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 18:57:14.960789  301596 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 18:57:14.960817  301596 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 18:57:14.963981  301596 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 18:57:14.964445  301596 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 19:56:06 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 18:57:14.964495  301596 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 18:57:14.964674  301596 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 18:57:14.964903  301596 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 18:57:14.965106  301596 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 18:57:14.965278  301596 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 18:57:15.090743  301596 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 18:57:15.151670  301596 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 18:57:15.222162  301596 main.go:141] libmachine: Stopping "no-preload-037711"...
	I0920 18:57:15.222197  301596 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 18:57:15.224003  301596 main.go:141] libmachine: (no-preload-037711) Calling .Stop
	I0920 18:57:15.228302  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 0/120
	I0920 18:57:16.229979  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 1/120
	I0920 18:57:17.231237  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 2/120
	I0920 18:57:18.232792  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 3/120
	I0920 18:57:19.234193  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 4/120
	I0920 18:57:20.235845  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 5/120
	I0920 18:57:21.237352  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 6/120
	I0920 18:57:22.239257  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 7/120
	I0920 18:57:23.240858  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 8/120
	I0920 18:57:24.242425  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 9/120
	I0920 18:57:25.243787  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 10/120
	I0920 18:57:26.245480  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 11/120
	I0920 18:57:27.247018  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 12/120
	I0920 18:57:28.248405  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 13/120
	I0920 18:57:29.249956  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 14/120
	I0920 18:57:30.252089  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 15/120
	I0920 18:57:31.254090  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 16/120
	I0920 18:57:32.256635  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 17/120
	I0920 18:57:33.258468  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 18/120
	I0920 18:57:34.260774  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 19/120
	I0920 18:57:35.263254  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 20/120
	I0920 18:57:36.264782  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 21/120
	I0920 18:57:37.266305  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 22/120
	I0920 18:57:38.267689  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 23/120
	I0920 18:57:39.269113  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 24/120
	I0920 18:57:40.271208  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 25/120
	I0920 18:57:41.272507  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 26/120
	I0920 18:57:42.274838  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 27/120
	I0920 18:57:43.276305  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 28/120
	I0920 18:57:44.277541  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 29/120
	I0920 18:57:45.280210  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 30/120
	I0920 18:57:46.282134  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 31/120
	I0920 18:57:47.283660  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 32/120
	I0920 18:57:48.285250  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 33/120
	I0920 18:57:49.286814  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 34/120
	I0920 18:57:50.289057  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 35/120
	I0920 18:57:51.290708  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 36/120
	I0920 18:57:52.292061  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 37/120
	I0920 18:57:53.293630  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 38/120
	I0920 18:57:54.295207  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 39/120
	I0920 18:57:55.296650  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 40/120
	I0920 18:57:56.298495  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 41/120
	I0920 18:57:57.300651  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 42/120
	I0920 18:57:58.302231  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 43/120
	I0920 18:57:59.304053  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 44/120
	I0920 18:58:00.306296  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 45/120
	I0920 18:58:01.307823  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 46/120
	I0920 18:58:02.309487  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 47/120
	I0920 18:58:03.311333  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 48/120
	I0920 18:58:04.312957  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 49/120
	I0920 18:58:05.314378  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 50/120
	I0920 18:58:06.316254  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 51/120
	I0920 18:58:07.317702  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 52/120
	I0920 18:58:08.319318  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 53/120
	I0920 18:58:09.320783  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 54/120
	I0920 18:58:10.322841  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 55/120
	I0920 18:58:11.324550  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 56/120
	I0920 18:58:12.326347  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 57/120
	I0920 18:58:13.328232  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 58/120
	I0920 18:58:14.329671  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 59/120
	I0920 18:58:15.332155  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 60/120
	I0920 18:58:16.334049  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 61/120
	I0920 18:58:17.335713  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 62/120
	I0920 18:58:18.337291  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 63/120
	I0920 18:58:19.338750  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 64/120
	I0920 18:58:20.341073  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 65/120
	I0920 18:58:21.342761  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 66/120
	I0920 18:58:22.344336  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 67/120
	I0920 18:58:23.345855  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 68/120
	I0920 18:58:24.347393  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 69/120
	I0920 18:58:25.348772  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 70/120
	I0920 18:58:26.350108  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 71/120
	I0920 18:58:27.351414  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 72/120
	I0920 18:58:28.352957  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 73/120
	I0920 18:58:29.354559  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 74/120
	I0920 18:58:30.356857  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 75/120
	I0920 18:58:31.358260  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 76/120
	I0920 18:58:32.359732  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 77/120
	I0920 18:58:33.361200  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 78/120
	I0920 18:58:34.363030  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 79/120
	I0920 18:58:35.364327  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 80/120
	I0920 18:58:36.366158  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 81/120
	I0920 18:58:37.367526  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 82/120
	I0920 18:58:38.369082  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 83/120
	I0920 18:58:39.370685  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 84/120
	I0920 18:58:40.373077  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 85/120
	I0920 18:58:41.374822  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 86/120
	I0920 18:58:42.376223  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 87/120
	I0920 18:58:43.377958  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 88/120
	I0920 18:58:44.379755  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 89/120
	I0920 18:58:45.381342  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 90/120
	I0920 18:58:46.382789  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 91/120
	I0920 18:58:47.384400  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 92/120
	I0920 18:58:48.385869  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 93/120
	I0920 18:58:49.387466  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 94/120
	I0920 18:58:50.390024  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 95/120
	I0920 18:58:51.391735  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 96/120
	I0920 18:58:52.393497  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 97/120
	I0920 18:58:53.395094  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 98/120
	I0920 18:58:54.396427  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 99/120
	I0920 18:58:55.398074  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 100/120
	I0920 18:58:56.399691  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 101/120
	I0920 18:58:57.401089  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 102/120
	I0920 18:58:58.402708  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 103/120
	I0920 18:58:59.404191  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 104/120
	I0920 18:59:00.406364  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 105/120
	I0920 18:59:01.407878  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 106/120
	I0920 18:59:02.409343  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 107/120
	I0920 18:59:03.410842  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 108/120
	I0920 18:59:04.412497  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 109/120
	I0920 18:59:05.414015  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 110/120
	I0920 18:59:06.415943  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 111/120
	I0920 18:59:07.418047  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 112/120
	I0920 18:59:08.419800  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 113/120
	I0920 18:59:09.421293  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 114/120
	I0920 18:59:10.423581  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 115/120
	I0920 18:59:11.424968  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 116/120
	I0920 18:59:12.426648  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 117/120
	I0920 18:59:13.428404  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 118/120
	I0920 18:59:14.430037  301596 main.go:141] libmachine: (no-preload-037711) Waiting for machine to stop 119/120
	I0920 18:59:15.431054  301596 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0920 18:59:15.431127  301596 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0920 18:59:15.433034  301596 out.go:201] 
	W0920 18:59:15.434475  301596 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0920 18:59:15.434496  301596 out.go:270] * 
	* 
	W0920 18:59:15.437380  301596 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:59:15.438612  301596 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-037711 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-037711 -n no-preload-037711
E0920 18:59:15.617008  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:15.623516  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:15.635008  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:15.656508  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:15.698010  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:15.779545  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:15.941130  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:16.263086  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:16.904530  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:18.186123  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:20.747907  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:25.869463  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:26.036031  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-037711 -n no-preload-037711: exit status 3 (18.559151785s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:59:33.998320  302282 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.136:22: connect: no route to host
	E0920 18:59:33.998355  302282 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.136:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-037711" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.12s)
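Here the failure mode is different: `minikube stop` backed up /etc/cni and /etc/kubernetes, asked the kvm2 driver to stop the VM, then polled for all 120 attempts while the machine stayed in "Running", ending in GUEST_STOP_TIMEOUT (exit status 82). When that happens it can help to query libvirt directly; a minimal sketch, assuming `virsh` is available on the host and that the libvirt domain carries the profile name `no-preload-037711` (which may not hold on every setup).

	# What does libvirt think the domain state is?
	virsh domstate no-preload-037711

	# Request a graceful guest shutdown, then hard power-off if it is still running.
	virsh shutdown no-preload-037711
	sleep 30
	virsh domstate no-preload-037711 | grep -q 'shut off' || virsh destroy no-preload-037711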

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-339897 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-339897 --alsologtostderr -v=3: exit status 82 (2m0.555931425s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-339897"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:57:32.869452  301793 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:57:32.869728  301793 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:57:32.869742  301793 out.go:358] Setting ErrFile to fd 2...
	I0920 18:57:32.869749  301793 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:57:32.869977  301793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 18:57:32.870277  301793 out.go:352] Setting JSON to false
	I0920 18:57:32.870358  301793 mustload.go:65] Loading cluster: embed-certs-339897
	I0920 18:57:32.870826  301793 config.go:182] Loaded profile config "embed-certs-339897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:57:32.870938  301793 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/config.json ...
	I0920 18:57:32.871096  301793 mustload.go:65] Loading cluster: embed-certs-339897
	I0920 18:57:32.871204  301793 config.go:182] Loaded profile config "embed-certs-339897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:57:32.871233  301793 stop.go:39] StopHost: embed-certs-339897
	I0920 18:57:32.871640  301793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:57:32.871692  301793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:57:32.888289  301793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40559
	I0920 18:57:32.888872  301793 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:57:32.889622  301793 main.go:141] libmachine: Using API Version  1
	I0920 18:57:32.889653  301793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:57:32.890102  301793 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:57:32.892550  301793 out.go:177] * Stopping node "embed-certs-339897"  ...
	I0920 18:57:32.894285  301793 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 18:57:32.894328  301793 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 18:57:32.894673  301793 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 18:57:32.894706  301793 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 18:57:32.898243  301793 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 18:57:32.898806  301793 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 19:56:35 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 18:57:32.898831  301793 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 18:57:32.899153  301793 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 18:57:32.899380  301793 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 18:57:32.899544  301793 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 18:57:32.899686  301793 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 18:57:33.021974  301793 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 18:57:33.096736  301793 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 18:57:33.161759  301793 main.go:141] libmachine: Stopping "embed-certs-339897"...
	I0920 18:57:33.161795  301793 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 18:57:33.163669  301793 main.go:141] libmachine: (embed-certs-339897) Calling .Stop
	I0920 18:57:33.168313  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 0/120
	I0920 18:57:34.170099  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 1/120
	I0920 18:57:35.171464  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 2/120
	I0920 18:57:36.173086  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 3/120
	I0920 18:57:37.174493  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 4/120
	I0920 18:57:38.177041  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 5/120
	I0920 18:57:39.178500  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 6/120
	I0920 18:57:40.179964  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 7/120
	I0920 18:57:41.181286  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 8/120
	I0920 18:57:42.182677  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 9/120
	I0920 18:57:43.184344  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 10/120
	I0920 18:57:44.185952  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 11/120
	I0920 18:57:45.187476  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 12/120
	I0920 18:57:46.188735  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 13/120
	I0920 18:57:47.190228  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 14/120
	I0920 18:57:48.192736  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 15/120
	I0920 18:57:49.194472  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 16/120
	I0920 18:57:50.196171  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 17/120
	I0920 18:57:51.197646  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 18/120
	I0920 18:57:52.199138  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 19/120
	I0920 18:57:53.200709  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 20/120
	I0920 18:57:54.202057  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 21/120
	I0920 18:57:55.203547  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 22/120
	I0920 18:57:56.204942  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 23/120
	I0920 18:57:57.206644  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 24/120
	I0920 18:57:58.209121  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 25/120
	I0920 18:57:59.210661  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 26/120
	I0920 18:58:00.212232  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 27/120
	I0920 18:58:01.213808  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 28/120
	I0920 18:58:02.215567  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 29/120
	I0920 18:58:03.217169  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 30/120
	I0920 18:58:04.218742  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 31/120
	I0920 18:58:05.220173  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 32/120
	I0920 18:58:06.221451  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 33/120
	I0920 18:58:07.222903  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 34/120
	I0920 18:58:08.224922  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 35/120
	I0920 18:58:09.226961  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 36/120
	I0920 18:58:10.228286  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 37/120
	I0920 18:58:11.229762  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 38/120
	I0920 18:58:12.231183  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 39/120
	I0920 18:58:13.232868  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 40/120
	I0920 18:58:14.234526  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 41/120
	I0920 18:58:15.236092  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 42/120
	I0920 18:58:16.237979  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 43/120
	I0920 18:58:17.239202  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 44/120
	I0920 18:58:18.241246  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 45/120
	I0920 18:58:19.242734  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 46/120
	I0920 18:58:20.244425  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 47/120
	I0920 18:58:21.246503  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 48/120
	I0920 18:58:22.248167  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 49/120
	I0920 18:58:23.250782  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 50/120
	I0920 18:58:24.252495  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 51/120
	I0920 18:58:25.254065  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 52/120
	I0920 18:58:26.255503  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 53/120
	I0920 18:58:27.257069  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 54/120
	I0920 18:58:28.259399  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 55/120
	I0920 18:58:29.260824  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 56/120
	I0920 18:58:30.262535  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 57/120
	I0920 18:58:31.264300  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 58/120
	I0920 18:58:32.265839  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 59/120
	I0920 18:58:33.267556  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 60/120
	I0920 18:58:34.268981  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 61/120
	I0920 18:58:35.270808  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 62/120
	I0920 18:58:36.272344  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 63/120
	I0920 18:58:37.274089  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 64/120
	I0920 18:58:38.276493  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 65/120
	I0920 18:58:39.278167  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 66/120
	I0920 18:58:40.280039  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 67/120
	I0920 18:58:41.281564  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 68/120
	I0920 18:58:42.283033  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 69/120
	I0920 18:58:43.285502  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 70/120
	I0920 18:58:44.287471  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 71/120
	I0920 18:58:45.288973  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 72/120
	I0920 18:58:46.290744  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 73/120
	I0920 18:58:47.292229  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 74/120
	I0920 18:58:48.294320  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 75/120
	I0920 18:58:49.295881  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 76/120
	I0920 18:58:50.297346  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 77/120
	I0920 18:58:51.299145  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 78/120
	I0920 18:58:52.300506  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 79/120
	I0920 18:58:53.302170  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 80/120
	I0920 18:58:54.303571  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 81/120
	I0920 18:58:55.305149  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 82/120
	I0920 18:58:56.306644  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 83/120
	I0920 18:58:57.308220  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 84/120
	I0920 18:58:58.310582  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 85/120
	I0920 18:58:59.312318  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 86/120
	I0920 18:59:00.314008  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 87/120
	I0920 18:59:01.315682  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 88/120
	I0920 18:59:02.317288  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 89/120
	I0920 18:59:03.318671  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 90/120
	I0920 18:59:04.320487  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 91/120
	I0920 18:59:05.321759  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 92/120
	I0920 18:59:06.323438  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 93/120
	I0920 18:59:07.325124  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 94/120
	I0920 18:59:08.327808  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 95/120
	I0920 18:59:09.329471  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 96/120
	I0920 18:59:10.331228  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 97/120
	I0920 18:59:11.332900  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 98/120
	I0920 18:59:12.334487  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 99/120
	I0920 18:59:13.336004  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 100/120
	I0920 18:59:14.337303  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 101/120
	I0920 18:59:15.339113  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 102/120
	I0920 18:59:16.340600  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 103/120
	I0920 18:59:17.342140  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 104/120
	I0920 18:59:18.344427  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 105/120
	I0920 18:59:19.346122  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 106/120
	I0920 18:59:20.347867  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 107/120
	I0920 18:59:21.349375  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 108/120
	I0920 18:59:22.351032  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 109/120
	I0920 18:59:23.353454  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 110/120
	I0920 18:59:24.354959  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 111/120
	I0920 18:59:25.356545  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 112/120
	I0920 18:59:26.358117  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 113/120
	I0920 18:59:27.361028  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 114/120
	I0920 18:59:28.363412  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 115/120
	I0920 18:59:29.364783  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 116/120
	I0920 18:59:30.366634  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 117/120
	I0920 18:59:31.368195  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 118/120
	I0920 18:59:32.369822  301793 main.go:141] libmachine: (embed-certs-339897) Waiting for machine to stop 119/120
	I0920 18:59:33.370889  301793 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0920 18:59:33.370954  301793 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0920 18:59:33.373191  301793 out.go:201] 
	W0920 18:59:33.374908  301793 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0920 18:59:33.374930  301793 out.go:270] * 
	* 
	W0920 18:59:33.377770  301793 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:59:33.379225  301793 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-339897 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-339897 -n embed-certs-339897
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-339897 -n embed-certs-339897: exit status 3 (18.538377392s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:59:51.918276  302378 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.72:22: connect: no route to host
	E0920 18:59:51.918298  302378 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.72:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-339897" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.10s)
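This is the same GUEST_STOP_TIMEOUT pattern as the no-preload stop above, just against the embed-certs-339897 profile. Before retrying, it is worth capturing the logs that the error box asks for; a minimal sketch, assuming the job's working directory still contains the out/minikube-linux-amd64 binary used in this run.

	# Collect profile logs, as suggested in the error box above.
	out/minikube-linux-amd64 -p embed-certs-339897 logs --file=logs.txt

	# Keep the stop-specific log referenced above alongside logs.txt.
	cp /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log .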

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-612312 --alsologtostderr -v=3
E0920 18:58:04.096070  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:04.102579  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:04.114058  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:04.135569  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:04.177126  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:04.259129  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:04.421361  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:04.743153  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:05.384790  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:06.666916  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:09.228240  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:14.350073  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:24.591989  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:30.941966  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:33.963269  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:33.969772  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:33.981683  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:34.003168  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:34.044738  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:34.126421  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:34.288342  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:34.610603  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:35.252561  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:36.534662  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:39.096546  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:44.218046  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:45.074204  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:54.459870  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:14.941895  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-612312 --alsologtostderr -v=3: exit status 82 (2m0.524022474s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-612312"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:57:51.640413  301989 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:57:51.640526  301989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:57:51.640534  301989 out.go:358] Setting ErrFile to fd 2...
	I0920 18:57:51.640538  301989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:57:51.640715  301989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 18:57:51.640953  301989 out.go:352] Setting JSON to false
	I0920 18:57:51.641034  301989 mustload.go:65] Loading cluster: default-k8s-diff-port-612312
	I0920 18:57:51.641415  301989 config.go:182] Loaded profile config "default-k8s-diff-port-612312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:57:51.641491  301989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/config.json ...
	I0920 18:57:51.641686  301989 mustload.go:65] Loading cluster: default-k8s-diff-port-612312
	I0920 18:57:51.641794  301989 config.go:182] Loaded profile config "default-k8s-diff-port-612312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:57:51.641845  301989 stop.go:39] StopHost: default-k8s-diff-port-612312
	I0920 18:57:51.642263  301989 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:57:51.642305  301989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:57:51.657423  301989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38573
	I0920 18:57:51.657947  301989 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:57:51.658617  301989 main.go:141] libmachine: Using API Version  1
	I0920 18:57:51.658650  301989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:57:51.659037  301989 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:57:51.661648  301989 out.go:177] * Stopping node "default-k8s-diff-port-612312"  ...
	I0920 18:57:51.663086  301989 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 18:57:51.663114  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 18:57:51.663361  301989 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 18:57:51.663397  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 18:57:51.666369  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 18:57:51.666927  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 19:57:00 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 18:57:51.666955  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 18:57:51.667125  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 18:57:51.667328  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 18:57:51.667507  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 18:57:51.667639  301989 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 18:57:51.777409  301989 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 18:57:51.835837  301989 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 18:57:51.893517  301989 main.go:141] libmachine: Stopping "default-k8s-diff-port-612312"...
	I0920 18:57:51.893551  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 18:57:51.895599  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Stop
	I0920 18:57:51.899758  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 0/120
	I0920 18:57:52.901058  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 1/120
	I0920 18:57:53.902595  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 2/120
	I0920 18:57:54.904203  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 3/120
	I0920 18:57:55.905631  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 4/120
	I0920 18:57:56.908034  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 5/120
	I0920 18:57:57.909631  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 6/120
	I0920 18:57:58.911331  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 7/120
	I0920 18:57:59.912956  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 8/120
	I0920 18:58:00.914777  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 9/120
	I0920 18:58:01.917016  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 10/120
	I0920 18:58:02.918672  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 11/120
	I0920 18:58:03.920123  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 12/120
	I0920 18:58:04.921602  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 13/120
	I0920 18:58:05.923143  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 14/120
	I0920 18:58:06.925595  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 15/120
	I0920 18:58:07.927023  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 16/120
	I0920 18:58:08.928647  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 17/120
	I0920 18:58:09.930301  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 18/120
	I0920 18:58:10.931872  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 19/120
	I0920 18:58:11.933815  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 20/120
	I0920 18:58:12.935424  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 21/120
	I0920 18:58:13.937274  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 22/120
	I0920 18:58:14.938772  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 23/120
	I0920 18:58:15.940388  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 24/120
	I0920 18:58:16.942649  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 25/120
	I0920 18:58:17.944655  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 26/120
	I0920 18:58:18.946313  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 27/120
	I0920 18:58:19.947963  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 28/120
	I0920 18:58:20.949714  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 29/120
	I0920 18:58:21.951890  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 30/120
	I0920 18:58:22.953465  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 31/120
	I0920 18:58:23.955184  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 32/120
	I0920 18:58:24.957012  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 33/120
	I0920 18:58:25.958556  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 34/120
	I0920 18:58:26.961058  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 35/120
	I0920 18:58:27.962608  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 36/120
	I0920 18:58:28.964595  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 37/120
	I0920 18:58:29.966234  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 38/120
	I0920 18:58:30.967809  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 39/120
	I0920 18:58:31.969258  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 40/120
	I0920 18:58:32.971092  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 41/120
	I0920 18:58:33.972399  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 42/120
	I0920 18:58:34.973852  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 43/120
	I0920 18:58:35.975399  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 44/120
	I0920 18:58:36.977800  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 45/120
	I0920 18:58:37.979457  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 46/120
	I0920 18:58:38.981047  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 47/120
	I0920 18:58:39.982485  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 48/120
	I0920 18:58:40.984344  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 49/120
	I0920 18:58:41.986291  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 50/120
	I0920 18:58:42.987665  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 51/120
	I0920 18:58:43.989321  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 52/120
	I0920 18:58:44.990783  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 53/120
	I0920 18:58:45.992549  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 54/120
	I0920 18:58:46.994748  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 55/120
	I0920 18:58:47.996363  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 56/120
	I0920 18:58:48.997879  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 57/120
	I0920 18:58:49.999414  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 58/120
	I0920 18:58:51.001147  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 59/120
	I0920 18:58:52.002964  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 60/120
	I0920 18:58:53.004565  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 61/120
	I0920 18:58:54.006160  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 62/120
	I0920 18:58:55.007995  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 63/120
	I0920 18:58:56.009450  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 64/120
	I0920 18:58:57.011883  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 65/120
	I0920 18:58:58.013342  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 66/120
	I0920 18:58:59.015170  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 67/120
	I0920 18:59:00.016871  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 68/120
	I0920 18:59:01.018720  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 69/120
	I0920 18:59:02.021233  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 70/120
	I0920 18:59:03.022701  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 71/120
	I0920 18:59:04.024452  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 72/120
	I0920 18:59:05.026165  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 73/120
	I0920 18:59:06.028103  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 74/120
	I0920 18:59:07.030404  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 75/120
	I0920 18:59:08.032027  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 76/120
	I0920 18:59:09.033619  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 77/120
	I0920 18:59:10.035136  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 78/120
	I0920 18:59:11.036595  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 79/120
	I0920 18:59:12.039251  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 80/120
	I0920 18:59:13.041027  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 81/120
	I0920 18:59:14.043115  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 82/120
	I0920 18:59:15.044705  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 83/120
	I0920 18:59:16.046450  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 84/120
	I0920 18:59:17.048062  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 85/120
	I0920 18:59:18.049626  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 86/120
	I0920 18:59:19.051617  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 87/120
	I0920 18:59:20.053653  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 88/120
	I0920 18:59:21.055169  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 89/120
	I0920 18:59:22.057884  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 90/120
	I0920 18:59:23.059432  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 91/120
	I0920 18:59:24.061325  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 92/120
	I0920 18:59:25.063181  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 93/120
	I0920 18:59:26.064699  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 94/120
	I0920 18:59:27.067154  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 95/120
	I0920 18:59:28.068660  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 96/120
	I0920 18:59:29.070212  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 97/120
	I0920 18:59:30.071801  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 98/120
	I0920 18:59:31.073400  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 99/120
	I0920 18:59:32.075125  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 100/120
	I0920 18:59:33.076649  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 101/120
	I0920 18:59:34.078092  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 102/120
	I0920 18:59:35.079550  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 103/120
	I0920 18:59:36.080983  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 104/120
	I0920 18:59:37.083147  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 105/120
	I0920 18:59:38.085100  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 106/120
	I0920 18:59:39.086665  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 107/120
	I0920 18:59:40.088186  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 108/120
	I0920 18:59:41.089689  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 109/120
	I0920 18:59:42.092167  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 110/120
	I0920 18:59:43.093699  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 111/120
	I0920 18:59:44.095100  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 112/120
	I0920 18:59:45.096957  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 113/120
	I0920 18:59:46.098696  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 114/120
	I0920 18:59:47.101544  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 115/120
	I0920 18:59:48.103106  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 116/120
	I0920 18:59:49.104691  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 117/120
	I0920 18:59:50.106581  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 118/120
	I0920 18:59:51.108334  301989 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for machine to stop 119/120
	I0920 18:59:52.109817  301989 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0920 18:59:52.109884  301989 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0920 18:59:52.112020  301989 out.go:201] 
	W0920 18:59:52.113676  301989 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0920 18:59:52.113696  301989 out.go:270] * 
	* 
	W0920 18:59:52.116414  301989 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:59:52.117864  301989 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-612312 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-612312 -n default-k8s-diff-port-612312
E0920 18:59:53.507665  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-612312 -n default-k8s-diff-port-612312: exit status 3 (18.487098594s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 19:00:10.606308  302625 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.230:22: connect: no route to host
	E0920 19:00:10.606333  302625 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.230:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-612312" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.01s)
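
The stop above polled the KVM domain for the full 120 attempts and gave up with GUEST_STOP_TIMEOUT while the guest still reported "Running". A minimal way to rerun this outside the test harness and inspect the stuck domain, assuming the same profile name and the qemu:///system URI used elsewhere in this run (the virsh check is an added inspection step, not part of the test):

	out/minikube-linux-amd64 stop -p default-k8s-diff-port-612312 --alsologtostderr -v=3
	virsh -c qemu:///system list --all
	out/minikube-linux-amd64 logs -p default-k8s-diff-port-612312 --file=logs.txt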

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-037711 -n no-preload-037711
E0920 18:59:36.111382  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-037711 -n no-preload-037711: exit status 3 (3.167748077s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:59:37.166318  302409 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.136:22: connect: no route to host
	E0920 18:59:37.166341  302409 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.136:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-037711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-037711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153028536s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.136:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-037711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-037711 -n no-preload-037711
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-037711 -n no-preload-037711: exit status 3 (3.062815464s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:59:46.382384  302491 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.136:22: connect: no route to host
	E0920 18:59:46.382413  302491 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.136:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-037711" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-339897 -n embed-certs-339897
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-339897 -n embed-certs-339897: exit status 3 (3.168062132s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:59:55.086401  302595 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.72:22: connect: no route to host
	E0920 18:59:55.086431  302595 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.72:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-339897 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0920 18:59:55.904351  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:56.593271  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-339897 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153260829s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.72:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-339897 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-339897 -n embed-certs-339897
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-339897 -n embed-certs-339897: exit status 3 (3.062574012s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 19:00:04.302405  302839 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.72:22: connect: no route to host
	E0920 19:00:04.302431  302839 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.72:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-339897" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-425599 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-425599 create -f testdata/busybox.yaml: exit status 1 (48.18231ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-425599" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-425599 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-425599 -n old-k8s-version-425599
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-425599 -n old-k8s-version-425599: exit status 6 (237.807161ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:59:57.276982  302733 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-425599" does not appear in /home/jenkins/minikube-integration/19679-237658/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-425599" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-425599 -n old-k8s-version-425599
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-425599 -n old-k8s-version-425599: exit status 6 (226.920443ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:59:57.501426  302763 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-425599" does not appear in /home/jenkins/minikube-integration/19679-237658/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-425599" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)
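
Both status calls above warn that kubectl is pointing at a stale minikube VM and that the "old-k8s-version-425599" context is missing from the kubeconfig, which is why the busybox create fails immediately. One way to confirm the missing context and, if the cluster were actually serving, repair it (update-context is the fix the warning itself suggests; it cannot help while the profile has no working apiserver):

	kubectl config get-contexts
	out/minikube-linux-amd64 -p old-k8s-version-425599 update-context
	kubectl --context old-k8s-version-425599 create -f testdata/busybox.yaml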

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (86.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-425599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0920 18:59:58.629676  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-425599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m26.374879965s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-425599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-425599 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-425599 describe deploy/metrics-server -n kube-system: exit status 1 (45.327652ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-425599" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-425599 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-425599 -n old-k8s-version-425599
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-425599 -n old-k8s-version-425599: exit status 6 (217.714688ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 19:01:24.144226  303357 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-425599" does not appear in /home/jenkins/minikube-integration/19679-237658/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-425599" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (86.64s)
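
The addon enable failed because kubectl apply on the node was refused at localhost:8443, i.e. the old-k8s-version-425599 apiserver is not serving even though the host itself reports "Running". A couple of in-guest checks that could narrow this down; the exact commands are an assumption and not part of the test:

	out/minikube-linux-amd64 -p old-k8s-version-425599 ssh "sudo crictl ps -a | grep kube-apiserver"
	out/minikube-linux-amd64 -p old-k8s-version-425599 ssh "sudo journalctl -u kubelet --no-pager | tail -n 50"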

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-612312 -n default-k8s-diff-port-612312
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-612312 -n default-k8s-diff-port-612312: exit status 3 (3.168364515s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 19:00:13.774306  302919 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.230:22: connect: no route to host
	E0920 19:00:13.774337  302919 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.230:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-612312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0920 19:00:13.962975  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:13.969427  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:13.980854  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:14.002305  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:14.043847  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:14.125544  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:14.287251  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:14.609024  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:15.251169  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:16.532967  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:19.094314  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-612312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153184136s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.230:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-612312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-612312 -n default-k8s-diff-port-612312
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-612312 -n default-k8s-diff-port-612312: exit status 3 (3.06246176s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 19:00:22.990466  303017 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.230:22: connect: no route to host
	E0920 19:00:22.990495  303017 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.230:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-612312" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (724.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-425599 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0920 19:01:31.122747  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:32.333603  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:35.900892  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:59.476157  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:02:12.085120  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:02:13.295042  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:02:29.487257  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:02:32.237991  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:02:57.822705  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:04.096410  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:30.942704  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:31.800464  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:33.962638  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:34.007141  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:35.216661  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:01.667606  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:15.618078  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:43.317956  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:48.377114  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:54.013442  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:13.963738  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:16.079708  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:41.664593  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:50.144956  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:51.355501  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:06:17.849326  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:06:19.058068  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:07:29.487198  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:04.096691  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:30.942418  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:33.962857  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:52.562039  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:09:15.617180  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-425599 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m0.81111312s)

-- stdout --
	* [old-k8s-version-425599] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-425599" primary control-plane node in "old-k8s-version-425599" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-425599" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0920 19:01:28.948776  303486 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:01:28.948894  303486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:01:28.948900  303486 out.go:358] Setting ErrFile to fd 2...
	I0920 19:01:28.948906  303486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:01:28.949090  303486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 19:01:28.949637  303486 out.go:352] Setting JSON to false
	I0920 19:01:28.950705  303486 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9832,"bootTime":1726849057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 19:01:28.950809  303486 start.go:139] virtualization: kvm guest
	I0920 19:01:28.953226  303486 out.go:177] * [old-k8s-version-425599] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 19:01:28.955013  303486 notify.go:220] Checking for updates...
	I0920 19:01:28.955065  303486 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 19:01:28.956932  303486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:01:28.959076  303486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:01:28.961116  303486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 19:01:28.963396  303486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 19:01:28.965428  303486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:01:28.967688  303486 config.go:182] Loaded profile config "old-k8s-version-425599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 19:01:28.968112  303486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:01:28.968175  303486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:01:28.984002  303486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40325
	I0920 19:01:28.984552  303486 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:01:28.985260  303486 main.go:141] libmachine: Using API Version  1
	I0920 19:01:28.985291  303486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:01:28.985715  303486 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:01:28.985972  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:01:28.988070  303486 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 19:01:28.989565  303486 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:01:28.990007  303486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:01:28.990079  303486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:01:29.006020  303486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34059
	I0920 19:01:29.006492  303486 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:01:29.007046  303486 main.go:141] libmachine: Using API Version  1
	I0920 19:01:29.007078  303486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:01:29.007441  303486 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:01:29.007706  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:01:29.049785  303486 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 19:01:29.051185  303486 start.go:297] selected driver: kvm2
	I0920 19:01:29.051206  303486 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:01:29.051323  303486 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:01:29.052030  303486 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:01:29.052131  303486 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 19:01:29.068826  303486 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 19:01:29.069232  303486 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:01:29.069262  303486 cni.go:84] Creating CNI manager for ""
	I0920 19:01:29.069297  303486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:01:29.069333  303486 start.go:340] cluster config:
	{Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:01:29.069439  303486 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:01:29.071617  303486 out.go:177] * Starting "old-k8s-version-425599" primary control-plane node in "old-k8s-version-425599" cluster
	I0920 19:01:29.073133  303486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 19:01:29.073174  303486 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 19:01:29.073182  303486 cache.go:56] Caching tarball of preloaded images
	I0920 19:01:29.073269  303486 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 19:01:29.073285  303486 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 19:01:29.073388  303486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/config.json ...
	I0920 19:01:29.073573  303486 start.go:360] acquireMachinesLock for old-k8s-version-425599: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:05:03.674961  303486 start.go:364] duration metric: took 3m34.601349843s to acquireMachinesLock for "old-k8s-version-425599"
	I0920 19:05:03.675039  303486 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:05:03.675048  303486 fix.go:54] fixHost starting: 
	I0920 19:05:03.675480  303486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:03.675541  303486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:03.694201  303486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34167
	I0920 19:05:03.694642  303486 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:03.695198  303486 main.go:141] libmachine: Using API Version  1
	I0920 19:05:03.695221  303486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:03.695609  303486 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:03.695793  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:03.695935  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetState
	I0920 19:05:03.697838  303486 fix.go:112] recreateIfNeeded on old-k8s-version-425599: state=Stopped err=<nil>
	I0920 19:05:03.697885  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	W0920 19:05:03.698080  303486 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:05:03.700333  303486 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-425599" ...
	I0920 19:05:03.701947  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .Start
	I0920 19:05:03.702184  303486 main.go:141] libmachine: (old-k8s-version-425599) Ensuring networks are active...
	I0920 19:05:03.703106  303486 main.go:141] libmachine: (old-k8s-version-425599) Ensuring network default is active
	I0920 19:05:03.703645  303486 main.go:141] libmachine: (old-k8s-version-425599) Ensuring network mk-old-k8s-version-425599 is active
	I0920 19:05:03.704152  303486 main.go:141] libmachine: (old-k8s-version-425599) Getting domain xml...
	I0920 19:05:03.704942  303486 main.go:141] libmachine: (old-k8s-version-425599) Creating domain...
	I0920 19:05:05.040094  303486 main.go:141] libmachine: (old-k8s-version-425599) Waiting to get IP...
	I0920 19:05:05.041198  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:05.041615  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:05.041711  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:05.041616  304380 retry.go:31] will retry after 264.073086ms: waiting for machine to come up
	I0920 19:05:05.307229  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:05.307761  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:05.307784  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:05.307713  304380 retry.go:31] will retry after 317.541552ms: waiting for machine to come up
	I0920 19:05:05.627262  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:05.627903  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:05.627929  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:05.627797  304380 retry.go:31] will retry after 432.236037ms: waiting for machine to come up
	I0920 19:05:06.062368  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:06.062842  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:06.062873  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:06.062804  304380 retry.go:31] will retry after 525.683807ms: waiting for machine to come up
	I0920 19:05:06.590915  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:06.591405  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:06.591434  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:06.591355  304380 retry.go:31] will retry after 542.00244ms: waiting for machine to come up
	I0920 19:05:07.135388  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:07.135944  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:07.135998  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:07.135908  304380 retry.go:31] will retry after 886.798885ms: waiting for machine to come up
	I0920 19:05:08.024147  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:08.024684  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:08.024713  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:08.024596  304380 retry.go:31] will retry after 826.869965ms: waiting for machine to come up
	I0920 19:05:08.853176  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:08.853793  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:08.853828  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:08.853736  304380 retry.go:31] will retry after 1.007422775s: waiting for machine to come up
	I0920 19:05:09.863227  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:09.863693  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:09.863721  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:09.863640  304380 retry.go:31] will retry after 1.556199895s: waiting for machine to come up
	I0920 19:05:11.422510  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:11.423244  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:11.423271  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:11.423179  304380 retry.go:31] will retry after 1.670177778s: waiting for machine to come up
	I0920 19:05:13.095982  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:13.096600  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:13.096626  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:13.096545  304380 retry.go:31] will retry after 2.71780554s: waiting for machine to come up
	I0920 19:05:15.815519  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:15.816035  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:15.816065  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:15.815974  304380 retry.go:31] will retry after 2.62788631s: waiting for machine to come up
	I0920 19:05:18.446768  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:18.447219  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:18.447240  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:18.447166  304380 retry.go:31] will retry after 4.025841071s: waiting for machine to come up
	I0920 19:05:22.477850  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.478419  303486 main.go:141] libmachine: (old-k8s-version-425599) Found IP for machine: 192.168.39.53
	I0920 19:05:22.478454  303486 main.go:141] libmachine: (old-k8s-version-425599) Reserving static IP address...
	I0920 19:05:22.478473  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has current primary IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.478983  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "old-k8s-version-425599", mac: "52:54:00:d2:a5:70", ip: "192.168.39.53"} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.479021  303486 main.go:141] libmachine: (old-k8s-version-425599) Reserved static IP address: 192.168.39.53
	I0920 19:05:22.479040  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | skip adding static IP to network mk-old-k8s-version-425599 - found existing host DHCP lease matching {name: "old-k8s-version-425599", mac: "52:54:00:d2:a5:70", ip: "192.168.39.53"}
	I0920 19:05:22.479055  303486 main.go:141] libmachine: (old-k8s-version-425599) Waiting for SSH to be available...
	I0920 19:05:22.479067  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | Getting to WaitForSSH function...
	I0920 19:05:22.481118  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.481359  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.481382  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.481556  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | Using SSH client type: external
	I0920 19:05:22.481570  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa (-rw-------)
	I0920 19:05:22.481600  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:05:22.481612  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | About to run SSH command:
	I0920 19:05:22.481627  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | exit 0
	I0920 19:05:22.606383  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | SSH cmd err, output: <nil>: 
	I0920 19:05:22.606783  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetConfigRaw
	I0920 19:05:22.607408  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:22.610155  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.610474  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.610506  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.610784  303486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/config.json ...
	I0920 19:05:22.611075  303486 machine.go:93] provisionDockerMachine start ...
	I0920 19:05:22.611103  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:22.611332  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.613838  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.614250  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.614283  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.614395  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:22.614609  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.614776  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.614950  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:22.615136  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:22.615331  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:22.615344  303486 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:05:22.718330  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:05:22.718363  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 19:05:22.718651  303486 buildroot.go:166] provisioning hostname "old-k8s-version-425599"
	I0920 19:05:22.718697  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 19:05:22.718913  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.722027  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.722334  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.722370  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.722559  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:22.722738  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.722909  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.723086  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:22.723261  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:22.723473  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:22.723491  303486 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-425599 && echo "old-k8s-version-425599" | sudo tee /etc/hostname
	I0920 19:05:22.841563  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-425599
	
	I0920 19:05:22.841592  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.844327  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.844716  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.844748  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.844970  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:22.845154  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.845306  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.845413  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:22.845570  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:22.845793  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:22.845818  303486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-425599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-425599/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-425599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:05:22.959542  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:05:22.959572  303486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:05:22.959615  303486 buildroot.go:174] setting up certificates
	I0920 19:05:22.959625  303486 provision.go:84] configureAuth start
	I0920 19:05:22.959635  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 19:05:22.959972  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:22.962506  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.962845  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.962883  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.963020  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.965352  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.965734  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.965755  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.965936  303486 provision.go:143] copyHostCerts
	I0920 19:05:22.965999  303486 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:05:22.966018  303486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:05:22.966073  303486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:05:22.966165  303486 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:05:22.966173  303486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:05:22.966193  303486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:05:22.966250  303486 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:05:22.966257  303486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:05:22.966274  303486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:05:22.966368  303486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-425599 san=[127.0.0.1 192.168.39.53 localhost minikube old-k8s-version-425599]
	I0920 19:05:23.156245  303486 provision.go:177] copyRemoteCerts
	I0920 19:05:23.156322  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:05:23.156356  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.159694  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.160062  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.160105  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.160283  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.160467  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.160633  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.160755  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.244439  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:05:23.271796  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 19:05:23.298124  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:05:23.323466  303486 provision.go:87] duration metric: took 363.82725ms to configureAuth
	I0920 19:05:23.323496  303486 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:05:23.323711  303486 config.go:182] Loaded profile config "old-k8s-version-425599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 19:05:23.323805  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.326985  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.327336  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.327363  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.327573  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.327788  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.328003  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.328161  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.328315  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:23.328492  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:23.328506  303486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:05:23.559721  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:05:23.559755  303486 machine.go:96] duration metric: took 948.663189ms to provisionDockerMachine
	I0920 19:05:23.559770  303486 start.go:293] postStartSetup for "old-k8s-version-425599" (driver="kvm2")
	I0920 19:05:23.559781  303486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:05:23.559812  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.560186  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:05:23.560225  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.563146  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.563462  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.563491  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.563786  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.564018  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.564214  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.564365  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.645013  303486 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:05:23.649198  303486 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:05:23.649230  303486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:05:23.649300  303486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:05:23.649416  303486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:05:23.649544  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:05:23.659351  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:23.683405  303486 start.go:296] duration metric: took 123.617289ms for postStartSetup
	I0920 19:05:23.683466  303486 fix.go:56] duration metric: took 20.008417985s for fixHost
	I0920 19:05:23.683495  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.686540  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.686962  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.686988  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.687209  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.687445  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.687624  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.687803  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.688001  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:23.688188  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:23.688206  303486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:05:23.790992  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859123.767729644
	
	I0920 19:05:23.791024  303486 fix.go:216] guest clock: 1726859123.767729644
	I0920 19:05:23.791035  303486 fix.go:229] Guest: 2024-09-20 19:05:23.767729644 +0000 UTC Remote: 2024-09-20 19:05:23.683472425 +0000 UTC m=+234.770765310 (delta=84.257219ms)
	I0920 19:05:23.791061  303486 fix.go:200] guest clock delta is within tolerance: 84.257219ms
	I0920 19:05:23.791068  303486 start.go:83] releasing machines lock for "old-k8s-version-425599", held for 20.116056408s
	I0920 19:05:23.791101  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.791432  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:23.794540  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.795015  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.795048  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.795226  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.795779  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.795992  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.796129  303486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:05:23.796180  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.796241  303486 ssh_runner.go:195] Run: cat /version.json
	I0920 19:05:23.796265  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.799032  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.799374  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.799399  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.799418  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.799540  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.799743  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.799874  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.799890  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.799906  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.800084  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.800077  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.800198  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.800365  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.800514  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.924885  303486 ssh_runner.go:195] Run: systemctl --version
	I0920 19:05:23.932642  303486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:05:24.083860  303486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:05:24.090360  303486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:05:24.090444  303486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:05:24.112281  303486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:05:24.112310  303486 start.go:495] detecting cgroup driver to use...
	I0920 19:05:24.112383  303486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:05:24.136600  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:05:24.154552  303486 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:05:24.154631  303486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:05:24.170600  303486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:05:24.186071  303486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:05:24.319752  303486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:05:24.498299  303486 docker.go:233] disabling docker service ...
	I0920 19:05:24.498385  303486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:05:24.515762  303486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:05:24.533482  303486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:05:24.687481  303486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:05:24.820191  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:05:24.835255  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:05:24.856179  303486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 19:05:24.856253  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.868991  303486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:05:24.869080  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.884074  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.898732  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.911016  303486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:05:24.922757  303486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:05:24.937719  303486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:05:24.937828  303486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:05:24.955496  303486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
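
The two commands above are the recovery path for the failed sysctl check: when /proc/sys/net/bridge/bridge-nf-call-iptables is missing, the br_netfilter module has not been loaded, so it is loaded with modprobe and IPv4 forwarding is then enabled. A small Go sketch of that sequence; the function name is made up, the paths and commands are the ones the runner uses:

package main

import (
    "fmt"
    "os"
    "os/exec"
)

// ensureBridgeNetfilter loads br_netfilter if the bridge netfilter sysctl is
// absent, then enables IPv4 forwarding, mirroring the log lines above.
func ensureBridgeNetfilter() error {
    if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
        if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
            return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
        }
    }
    return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
    if err := ensureBridgeNetfilter(); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}
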
	I0920 19:05:24.966347  303486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:25.114758  303486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:05:25.226807  303486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:05:25.226984  303486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:05:25.234576  303486 start.go:563] Will wait 60s for crictl version
	I0920 19:05:25.234664  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:25.238739  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:05:25.282242  303486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:05:25.282344  303486 ssh_runner.go:195] Run: crio --version
	I0920 19:05:25.317733  303486 ssh_runner.go:195] Run: crio --version
	I0920 19:05:25.353767  303486 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 19:05:25.354959  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:25.358179  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:25.358467  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:25.358495  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:25.358739  303486 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 19:05:25.362714  303486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:25.375880  303486 kubeadm.go:883] updating cluster {Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:05:25.376024  303486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 19:05:25.376074  303486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:25.420224  303486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 19:05:25.420307  303486 ssh_runner.go:195] Run: which lz4
	I0920 19:05:25.424775  303486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:05:25.430102  303486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:05:25.430151  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 19:05:27.014068  303486 crio.go:462] duration metric: took 1.589333502s to copy over tarball
	I0920 19:05:27.014160  303486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 19:05:30.096727  303486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.082523066s)
	I0920 19:05:30.096778  303486 crio.go:469] duration metric: took 3.082671461s to extract the tarball
	I0920 19:05:30.096789  303486 ssh_runner.go:146] rm: /preloaded.tar.lz4
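
The preload handling above follows one sequence: stat the tarball on the guest, scp it over when the stat fails, extract it under /var with tar and lz4, then delete the tarball. A dry-run Go sketch of that sequence; the command strings are taken from the log, while the helper name and the idea of printing rather than executing are illustrative:

package main

import (
    "fmt"
)

// preloadCommands lists the guest-side steps the ssh_runner lines above run:
// 1) stat the target to see whether the preload tarball is already there,
// 2) copy it over if not, 3) extract it into /var, 4) remove the tarball.
func preloadCommands(tarball string) []string {
    return []string{
        fmt.Sprintf("stat -c \"%%s %%y\" %s", tarball),
        // scp of the cached preloaded-images tarball happens here when stat fails
        fmt.Sprintf("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf %s", tarball),
        fmt.Sprintf("rm %s", tarball),
    }
}

func main() {
    for _, c := range preloadCommands("/preloaded.tar.lz4") {
        fmt.Println(c)
    }
}
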
	I0920 19:05:30.148059  303486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:30.184547  303486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 19:05:30.184578  303486 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 19:05:30.184672  303486 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:30.184686  303486 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.184711  303486 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.184730  303486 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.184732  303486 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.184686  303486 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 19:05:30.184693  303486 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.184792  303486 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.186558  303486 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:30.186609  303486 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 19:05:30.186607  303486 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.186616  303486 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.186688  303486 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.186698  303486 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.186701  303486 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.186565  303486 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.425283  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 19:05:30.469378  303486 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 19:05:30.469448  303486 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 19:05:30.469514  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.475453  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:05:30.493250  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.505003  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.513203  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.514365  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.521729  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:05:30.533265  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.580710  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.613984  303486 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 19:05:30.614033  303486 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.614085  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.653094  303486 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 19:05:30.653150  303486 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.653205  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.675697  303486 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 19:05:30.675730  303486 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 19:05:30.675752  303486 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.675762  303486 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.675805  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.675820  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.675805  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:05:30.709199  303486 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 19:05:30.709261  303486 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.709310  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.720146  303486 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 19:05:30.720198  303486 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.720233  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.720313  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.720241  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.720374  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.720247  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.737444  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.737487  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 19:05:30.843272  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.843362  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.843366  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.860414  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.860462  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.860430  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.954641  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.982227  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.982263  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:05:31.041996  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:31.042032  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:31.042650  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:31.042722  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:31.070786  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 19:05:31.120407  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 19:05:31.135751  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 19:05:31.163591  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 19:05:31.164483  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 19:05:31.164587  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 19:05:31.345957  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:31.486337  303486 cache_images.go:92] duration metric: took 1.301737533s to LoadCachedImages
	W0920 19:05:31.486434  303486 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
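
Each "needs transfer" image above goes through the same three steps: inspect the runtime for the image, remove any stale copy with crictl, then load it from the local cache directory (which is what fails here, because the cached pause_3.2 file is missing). A dry-run Go sketch of those steps; the function name is made up, the command strings and cache path are copied from the log:

package main

import "fmt"

// syncImageSteps lists the per-image commands visible in the log above: the
// runtime is inspected for the image, a stale copy is removed, and the image
// is then loaded from minikube's local image cache.
func syncImageSteps(image, cacheFile string) []string {
    return []string{
        "sudo podman image inspect --format {{.Id}} " + image,
        "sudo /usr/bin/crictl rmi " + image,
        "# then load from " + cacheFile,
    }
}

func main() {
    steps := syncImageSteps(
        "registry.k8s.io/pause:3.2",
        "/home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2",
    )
    for _, s := range steps {
        fmt.Println(s)
    }
}
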
	I0920 19:05:31.486452  303486 kubeadm.go:934] updating node { 192.168.39.53 8443 v1.20.0 crio true true} ...
	I0920 19:05:31.486576  303486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-425599 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:05:31.486661  303486 ssh_runner.go:195] Run: crio config
	I0920 19:05:31.544181  303486 cni.go:84] Creating CNI manager for ""
	I0920 19:05:31.544215  303486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:05:31.544229  303486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:05:31.544257  303486 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.53 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-425599 NodeName:old-k8s-version-425599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 19:05:31.544465  303486 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-425599"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:05:31.544556  303486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 19:05:31.559445  303486 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:05:31.559542  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:05:31.570446  303486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0920 19:05:31.588741  303486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:05:31.606454  303486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0920 19:05:31.624483  303486 ssh_runner.go:195] Run: grep 192.168.39.53	control-plane.minikube.internal$ /etc/hosts
	I0920 19:05:31.628285  303486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:31.641039  303486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:31.771690  303486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:05:31.789746  303486 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599 for IP: 192.168.39.53
	I0920 19:05:31.789775  303486 certs.go:194] generating shared ca certs ...
	I0920 19:05:31.789806  303486 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:31.790074  303486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:05:31.790150  303486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:05:31.790165  303486 certs.go:256] generating profile certs ...
	I0920 19:05:31.798117  303486 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/client.key
	I0920 19:05:31.798270  303486 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.key.e78cb154
	I0920 19:05:31.798333  303486 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.key
	I0920 19:05:31.798499  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:05:31.798543  303486 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:05:31.798557  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:05:31.798608  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:05:31.798659  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:05:31.798692  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:05:31.798748  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:31.799624  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:05:31.843298  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:05:31.877299  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:05:31.909777  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:05:31.947787  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 19:05:31.991175  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 19:05:32.019393  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:05:32.048475  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:05:32.084354  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:05:32.112161  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:05:32.138991  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:05:32.167653  303486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:05:32.185485  303486 ssh_runner.go:195] Run: openssl version
	I0920 19:05:32.192030  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:05:32.203761  303486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:32.209550  303486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:32.209650  303486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:32.216277  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:05:32.228192  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:05:32.239984  303486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:05:32.244782  303486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:05:32.244848  303486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:05:32.250865  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:05:32.262035  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:05:32.273790  303486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:05:32.279335  303486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:05:32.279414  303486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:05:32.286501  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:05:32.298052  303486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:05:32.303064  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:05:32.309973  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:05:32.316704  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:05:32.323166  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:05:32.330126  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:05:32.336554  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
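
The openssl x509 -checkend 86400 calls above exit non-zero only if the named certificate will expire within the next 86400 seconds (24 hours). An equivalent check in Go, parsing the PEM and comparing NotAfter; the helper name is illustrative and the path is one of the certificates checked above:

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

// certExpiresWithin mirrors `openssl x509 -noout -checkend <seconds>`: it
// returns true when the certificate at path will expire within d.
func certExpiresWithin(path string, d time.Duration) (bool, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return false, err
    }
    block, _ := pem.Decode(data)
    if block == nil {
        return false, fmt.Errorf("no PEM block in %s", path)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return false, err
    }
    return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
    // Path taken from the checks above; run this on the node itself.
    expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("expires within 24h:", expiring)
}
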
	I0920 19:05:32.343303  303486 kubeadm.go:392] StartCluster: {Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:05:32.343413  303486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:05:32.343473  303486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:32.387562  303486 cri.go:89] found id: ""
	I0920 19:05:32.387653  303486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:05:32.398143  303486 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:05:32.398167  303486 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:05:32.398222  303486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:05:32.407776  303486 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:05:32.409205  303486 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-425599" does not appear in /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:05:32.410267  303486 kubeconfig.go:62] /home/jenkins/minikube-integration/19679-237658/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-425599" cluster setting kubeconfig missing "old-k8s-version-425599" context setting]
	I0920 19:05:32.411776  303486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:32.457074  303486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:05:32.468055  303486 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.53
	I0920 19:05:32.468113  303486 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:05:32.468132  303486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:05:32.468211  303486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:32.505151  303486 cri.go:89] found id: ""
	I0920 19:05:32.505241  303486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:05:32.521391  303486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:05:32.531705  303486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:05:32.531728  303486 kubeadm.go:157] found existing configuration files:
	
	I0920 19:05:32.531774  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:05:32.541137  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:05:32.541219  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:05:32.550684  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:05:32.560262  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:05:32.560352  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:05:32.569735  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:05:32.579126  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:05:32.579199  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:05:32.589508  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:05:32.600985  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:05:32.601100  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
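
The block above greps each kubeconfig under /etc/kubernetes for the control-plane endpoint and removes any file that does not reference it (missing files are simply rm -f'd as well), so kubeadm can regenerate them in the next phases. A small Go sketch of that per-file decision; the function name is illustrative, the endpoint and path come from the log:

package main

import (
    "fmt"
    "os"
    "strings"
)

// removeStaleKubeconfig mirrors the grep/rm pairs above: if the config file
// does not reference the expected control-plane endpoint, it is removed so
// kubeadm regenerates it. Missing files are treated the same as stale ones.
func removeStaleKubeconfig(path, endpoint string) (removed bool, err error) {
    data, err := os.ReadFile(path)
    if err == nil && strings.Contains(string(data), endpoint) {
        return false, nil // up to date, keep it
    }
    if err != nil && !os.IsNotExist(err) {
        return false, err
    }
    return true, os.RemoveAll(path)
}

func main() {
    removed, err := removeStaleKubeconfig("/etc/kubernetes/admin.conf", "https://control-plane.minikube.internal:8443")
    fmt.Println("removed:", removed, "err:", err)
}
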
	I0920 19:05:32.611511  303486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:05:32.622346  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:32.755562  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:33.793472  303486 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.037864747s)
	I0920 19:05:33.793513  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:34.021260  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:34.142176  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:34.235507  303486 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:05:34.235618  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:34.736586  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:35.236065  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:35.735783  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:36.236406  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:36.736243  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:37.235994  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:37.736168  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:38.236559  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:38.736139  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:39.236010  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:39.735723  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:40.236003  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:40.735741  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:41.235689  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:41.736411  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:42.236028  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:42.735814  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:43.236391  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:43.736174  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:44.235886  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:44.736349  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:45.235783  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:45.736619  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:46.236082  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:46.736609  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:47.236078  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:47.736130  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:48.236218  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:48.735858  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:49.236645  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:49.736183  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:50.236642  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:50.735713  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:51.235862  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:51.736479  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:52.235726  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:52.735939  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:53.235759  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:53.736290  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:54.235840  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:54.735817  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:55.235812  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:55.736410  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:56.236203  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:56.735713  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:57.235777  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:57.735835  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:58.236448  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:58.736010  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:59.236283  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:59.736440  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:00.236142  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:00.735772  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:01.236360  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:01.736388  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:02.236462  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:02.736742  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.236457  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.736705  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:04.236005  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:04.735854  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:05.236716  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:05.736668  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:06.235839  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:06.736412  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:07.236224  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:07.735830  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:08.235800  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:08.736645  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:09.236127  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:09.735809  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:10.236585  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:10.735863  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:11.236700  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:11.736557  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:12.236483  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:12.735695  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:13.235905  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:13.736128  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:14.236234  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:14.736677  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:15.236499  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:15.735667  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:16.235774  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:16.735833  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:17.236149  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:17.735782  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:18.236400  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:18.736460  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:19.236298  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:19.736672  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:20.236401  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:20.735810  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:21.235673  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:21.736112  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:22.235998  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:22.736179  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:23.236680  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:23.736388  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:24.236369  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:24.736082  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:25.236694  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:25.736346  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:26.236075  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:26.736666  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:27.236418  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:27.736656  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:28.235972  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:28.735743  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:29.236688  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:29.736132  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:30.236404  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:30.735733  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:31.236364  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:31.736031  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:32.236457  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:32.735751  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:33.236371  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:33.736474  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:34.236387  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:34.236472  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:34.276702  303486 cri.go:89] found id: ""
	I0920 19:06:34.276735  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.276747  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:34.276758  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:34.276815  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:34.312886  303486 cri.go:89] found id: ""
	I0920 19:06:34.312923  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.312935  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:34.312950  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:34.313024  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:34.347199  303486 cri.go:89] found id: ""
	I0920 19:06:34.347240  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.347250  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:34.347258  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:34.347332  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:34.383077  303486 cri.go:89] found id: ""
	I0920 19:06:34.383110  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.383121  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:34.383130  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:34.383202  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:34.421184  303486 cri.go:89] found id: ""
	I0920 19:06:34.421212  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.421222  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:34.421231  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:34.421304  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:34.459964  303486 cri.go:89] found id: ""
	I0920 19:06:34.459998  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.460009  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:34.460018  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:34.460085  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:34.493761  303486 cri.go:89] found id: ""
	I0920 19:06:34.493803  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.493815  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:34.493824  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:34.493894  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:34.534406  303486 cri.go:89] found id: ""
	I0920 19:06:34.534445  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.534457  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:34.534471  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:34.534496  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:34.607256  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:34.607297  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:34.644923  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:34.644953  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:34.693574  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:34.693622  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:34.707703  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:34.707742  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:34.846809  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:37.347895  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:37.377651  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:37.377728  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:37.430034  303486 cri.go:89] found id: ""
	I0920 19:06:37.430071  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.430079  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:37.430087  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:37.430156  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:37.467026  303486 cri.go:89] found id: ""
	I0920 19:06:37.467055  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.467063  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:37.467069  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:37.467120  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:37.505791  303486 cri.go:89] found id: ""
	I0920 19:06:37.505824  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.505835  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:37.505845  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:37.505943  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:37.541519  303486 cri.go:89] found id: ""
	I0920 19:06:37.541556  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.541568  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:37.541577  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:37.541633  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:37.576088  303486 cri.go:89] found id: ""
	I0920 19:06:37.576126  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.576137  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:37.576146  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:37.576204  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:37.613039  303486 cri.go:89] found id: ""
	I0920 19:06:37.613074  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.613084  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:37.613091  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:37.613153  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:37.656440  303486 cri.go:89] found id: ""
	I0920 19:06:37.656473  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.656482  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:37.656489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:37.656555  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:37.693247  303486 cri.go:89] found id: ""
	I0920 19:06:37.693283  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.693292  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:37.693302  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:37.693321  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:37.769230  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:37.769280  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:37.811016  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:37.811058  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:37.865729  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:37.865773  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:37.880056  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:37.880094  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:37.956402  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:40.457303  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:40.473769  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:40.473848  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:40.511320  303486 cri.go:89] found id: ""
	I0920 19:06:40.511354  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.511363  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:40.511371  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:40.511433  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:40.547086  303486 cri.go:89] found id: ""
	I0920 19:06:40.547127  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.547138  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:40.547147  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:40.547216  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:40.580969  303486 cri.go:89] found id: ""
	I0920 19:06:40.581010  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.581022  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:40.581035  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:40.581098  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:40.615802  303486 cri.go:89] found id: ""
	I0920 19:06:40.615842  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.615851  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:40.615858  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:40.615931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:40.649398  303486 cri.go:89] found id: ""
	I0920 19:06:40.649444  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.649459  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:40.649467  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:40.649541  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:40.683124  303486 cri.go:89] found id: ""
	I0920 19:06:40.683160  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.683172  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:40.683181  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:40.683251  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:40.718005  303486 cri.go:89] found id: ""
	I0920 19:06:40.718032  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.718040  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:40.718047  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:40.718107  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:40.751965  303486 cri.go:89] found id: ""
	I0920 19:06:40.751992  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.752000  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:40.752010  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:40.752024  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:40.765195  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:40.765234  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:40.842287  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:40.842321  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:40.842338  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:40.928384  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:40.928430  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:40.970207  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:40.970242  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:43.526435  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:43.540582  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:43.540680  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:43.576798  303486 cri.go:89] found id: ""
	I0920 19:06:43.576837  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.576846  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:43.576852  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:43.576916  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:43.615261  303486 cri.go:89] found id: ""
	I0920 19:06:43.615290  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.615298  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:43.615305  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:43.615359  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:43.651214  303486 cri.go:89] found id: ""
	I0920 19:06:43.651251  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.651264  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:43.651277  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:43.651338  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:43.684483  303486 cri.go:89] found id: ""
	I0920 19:06:43.684523  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.684535  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:43.684544  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:43.684614  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:43.720996  303486 cri.go:89] found id: ""
	I0920 19:06:43.721026  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.721035  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:43.721041  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:43.721107  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:43.764445  303486 cri.go:89] found id: ""
	I0920 19:06:43.764482  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.764493  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:43.764501  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:43.764564  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:43.808848  303486 cri.go:89] found id: ""
	I0920 19:06:43.808878  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.808888  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:43.808897  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:43.808968  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:43.845462  303486 cri.go:89] found id: ""
	I0920 19:06:43.845491  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.845500  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:43.845511  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:43.845525  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:43.896550  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:43.896596  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:43.909243  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:43.909272  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:43.987455  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:43.987474  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:43.987491  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:44.063585  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:44.063629  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:46.602859  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:46.617286  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:46.617357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:46.653643  303486 cri.go:89] found id: ""
	I0920 19:06:46.653681  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.653693  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:46.653702  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:46.653778  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:46.691169  303486 cri.go:89] found id: ""
	I0920 19:06:46.691198  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.691206  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:46.691213  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:46.691271  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:46.725498  303486 cri.go:89] found id: ""
	I0920 19:06:46.725527  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.725538  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:46.725545  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:46.725614  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:46.758850  303486 cri.go:89] found id: ""
	I0920 19:06:46.758876  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.758884  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:46.758891  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:46.758942  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:46.793648  303486 cri.go:89] found id: ""
	I0920 19:06:46.793683  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.793692  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:46.793699  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:46.793755  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:46.832908  303486 cri.go:89] found id: ""
	I0920 19:06:46.832940  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.832947  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:46.832953  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:46.833019  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:46.866450  303486 cri.go:89] found id: ""
	I0920 19:06:46.866502  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.866513  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:46.866522  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:46.866593  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:46.901966  303486 cri.go:89] found id: ""
	I0920 19:06:46.902001  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.902013  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:46.902026  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:46.902041  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:46.948901  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:46.948946  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:46.963489  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:46.963534  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:47.041701  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:47.041722  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:47.041736  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:47.124320  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:47.124364  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:49.664255  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:49.677240  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:49.677322  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:49.712375  303486 cri.go:89] found id: ""
	I0920 19:06:49.712401  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.712409  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:49.712415  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:49.712476  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:49.747682  303486 cri.go:89] found id: ""
	I0920 19:06:49.747713  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.747721  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:49.747727  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:49.747783  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:49.782276  303486 cri.go:89] found id: ""
	I0920 19:06:49.782319  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.782329  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:49.782337  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:49.782400  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:49.822625  303486 cri.go:89] found id: ""
	I0920 19:06:49.822661  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.822672  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:49.822680  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:49.822751  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:49.862159  303486 cri.go:89] found id: ""
	I0920 19:06:49.862192  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.862202  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:49.862212  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:49.862281  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:49.897552  303486 cri.go:89] found id: ""
	I0920 19:06:49.897587  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.897595  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:49.897608  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:49.897667  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:49.931667  303486 cri.go:89] found id: ""
	I0920 19:06:49.931698  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.931709  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:49.931718  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:49.931774  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:49.969206  303486 cri.go:89] found id: ""
	I0920 19:06:49.969236  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.969244  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:49.969254  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:49.969266  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:50.019287  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:50.019328  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:50.033080  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:50.033113  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:50.106415  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:50.106442  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:50.106459  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:50.183710  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:50.183762  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:52.725443  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:52.739293  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:52.739386  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:52.772412  303486 cri.go:89] found id: ""
	I0920 19:06:52.772445  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.772454  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:52.772461  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:52.772528  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:52.811153  303486 cri.go:89] found id: ""
	I0920 19:06:52.811189  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.811197  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:52.811204  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:52.811260  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:52.848709  303486 cri.go:89] found id: ""
	I0920 19:06:52.848740  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.848749  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:52.848755  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:52.848811  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:52.883358  303486 cri.go:89] found id: ""
	I0920 19:06:52.883387  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.883394  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:52.883400  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:52.883455  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:52.917838  303486 cri.go:89] found id: ""
	I0920 19:06:52.917874  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.917893  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:52.917912  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:52.917982  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:52.952340  303486 cri.go:89] found id: ""
	I0920 19:06:52.952378  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.952387  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:52.952396  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:52.952471  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:52.986433  303486 cri.go:89] found id: ""
	I0920 19:06:52.986469  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.986478  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:52.986486  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:52.986582  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:53.024209  303486 cri.go:89] found id: ""
	I0920 19:06:53.024241  303486 logs.go:276] 0 containers: []
	W0920 19:06:53.024249  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:53.024260  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:53.024272  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:53.075336  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:53.075374  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:53.090761  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:53.090802  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:53.167883  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:53.167915  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:53.167933  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:53.242003  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:53.242044  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:55.779107  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:55.793713  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:55.793802  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:55.829411  303486 cri.go:89] found id: ""
	I0920 19:06:55.829441  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.829450  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:55.829456  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:55.829513  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:55.864578  303486 cri.go:89] found id: ""
	I0920 19:06:55.864606  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.864617  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:55.864625  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:55.864686  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:55.897004  303486 cri.go:89] found id: ""
	I0920 19:06:55.897033  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.897041  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:55.897048  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:55.897106  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:55.931019  303486 cri.go:89] found id: ""
	I0920 19:06:55.931055  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.931066  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:55.931076  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:55.931141  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:55.966595  303486 cri.go:89] found id: ""
	I0920 19:06:55.966625  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.966635  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:55.966643  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:55.966693  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:55.999707  303486 cri.go:89] found id: ""
	I0920 19:06:55.999736  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.999747  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:55.999756  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:55.999825  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:56.034323  303486 cri.go:89] found id: ""
	I0920 19:06:56.034361  303486 logs.go:276] 0 containers: []
	W0920 19:06:56.034371  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:56.034377  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:56.034433  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:56.069019  303486 cri.go:89] found id: ""
	I0920 19:06:56.069048  303486 logs.go:276] 0 containers: []
	W0920 19:06:56.069056  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:56.069066  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:56.069077  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:56.122820  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:56.122860  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:56.136924  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:56.136966  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:56.216255  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:56.216284  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:56.216299  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:56.293461  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:56.293506  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:58.831252  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:58.844410  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:58.844474  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:58.877508  303486 cri.go:89] found id: ""
	I0920 19:06:58.877539  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.877547  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:58.877555  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:58.877613  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:58.911284  303486 cri.go:89] found id: ""
	I0920 19:06:58.911315  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.911323  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:58.911329  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:58.911382  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:58.944646  303486 cri.go:89] found id: ""
	I0920 19:06:58.944675  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.944682  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:58.944688  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:58.944739  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:58.979752  303486 cri.go:89] found id: ""
	I0920 19:06:58.979787  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.979798  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:58.979807  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:58.979864  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:59.016613  303486 cri.go:89] found id: ""
	I0920 19:06:59.016649  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.016661  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:59.016670  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:59.016735  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:59.052012  303486 cri.go:89] found id: ""
	I0920 19:06:59.052039  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.052047  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:59.052054  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:59.052106  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:59.090102  303486 cri.go:89] found id: ""
	I0920 19:06:59.090140  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.090152  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:59.090159  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:59.090213  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:59.128028  303486 cri.go:89] found id: ""
	I0920 19:06:59.128057  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.128068  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:59.128080  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:59.128096  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:59.142966  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:59.143012  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:59.227311  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:59.227336  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:59.227357  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:59.308319  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:59.308366  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:59.347299  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:59.347336  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:01.897644  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:01.912876  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:01.912951  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:01.956550  303486 cri.go:89] found id: ""
	I0920 19:07:01.956679  303486 logs.go:276] 0 containers: []
	W0920 19:07:01.956690  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:01.956700  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:01.956765  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:01.995391  303486 cri.go:89] found id: ""
	I0920 19:07:01.995425  303486 logs.go:276] 0 containers: []
	W0920 19:07:01.995433  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:01.995440  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:01.995501  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:02.031149  303486 cri.go:89] found id: ""
	I0920 19:07:02.031181  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.031193  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:02.031202  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:02.031273  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:02.065856  303486 cri.go:89] found id: ""
	I0920 19:07:02.065885  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.065894  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:02.065924  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:02.065981  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:02.101974  303486 cri.go:89] found id: ""
	I0920 19:07:02.102018  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.102032  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:02.102041  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:02.102115  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:02.138108  303486 cri.go:89] found id: ""
	I0920 19:07:02.138142  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.138151  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:02.138156  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:02.138217  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:02.170136  303486 cri.go:89] found id: ""
	I0920 19:07:02.170165  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.170173  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:02.170179  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:02.170244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:02.203944  303486 cri.go:89] found id: ""
	I0920 19:07:02.203969  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.203978  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:02.203991  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:02.204008  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:02.256635  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:02.256679  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:02.270266  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:02.270303  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:02.341145  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:02.341182  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:02.341199  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:02.415133  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:02.415175  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:04.952448  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:04.966632  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:04.966702  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:05.001098  303486 cri.go:89] found id: ""
	I0920 19:07:05.001131  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.001141  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:05.001149  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:05.001217  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:05.038160  303486 cri.go:89] found id: ""
	I0920 19:07:05.038186  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.038196  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:05.038202  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:05.038260  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:05.083301  303486 cri.go:89] found id: ""
	I0920 19:07:05.083346  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.083357  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:05.083365  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:05.083436  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:05.118916  303486 cri.go:89] found id: ""
	I0920 19:07:05.118952  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.118964  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:05.118972  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:05.119065  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:05.157452  303486 cri.go:89] found id: ""
	I0920 19:07:05.157485  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.157496  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:05.157511  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:05.157587  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:05.197100  303486 cri.go:89] found id: ""
	I0920 19:07:05.197133  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.197143  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:05.197152  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:05.197225  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:05.231286  303486 cri.go:89] found id: ""
	I0920 19:07:05.231317  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.231328  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:05.231336  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:05.231409  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:05.269798  303486 cri.go:89] found id: ""
	I0920 19:07:05.269835  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.269847  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:05.269862  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:05.269882  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:05.310029  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:05.310068  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:05.360493  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:05.360537  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:05.373771  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:05.373815  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:05.449860  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:05.449886  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:05.449924  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
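Note: the block above is one pass of minikube's control-plane probe: it lists CRI containers for each expected component, then gathers kubelet, dmesg, node-describe and CRI-O logs, and the same pass repeats below every few seconds. A minimal sketch of the per-component check those crictl invocations perform when run directly on the node; the loop itself is illustrative, and only the crictl command and the component names are taken from this log:

  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
    ids=$(sudo crictl ps -a --quiet --name="$c")
    # empty output corresponds to the 'No container was found matching' warnings in the log
    [ -z "$ids" ] && echo "no container matching \"$c\"" || echo "$c: $ids"
  done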
	I0920 19:07:08.034520  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:08.049970  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:08.050040  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:08.084683  303486 cri.go:89] found id: ""
	I0920 19:07:08.084714  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.084724  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:08.084731  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:08.084799  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:08.121150  303486 cri.go:89] found id: ""
	I0920 19:07:08.121176  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.121183  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:08.121190  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:08.121244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:08.157830  303486 cri.go:89] found id: ""
	I0920 19:07:08.157865  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.157877  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:08.157891  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:08.157967  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:08.191040  303486 cri.go:89] found id: ""
	I0920 19:07:08.191082  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.191094  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:08.191102  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:08.191169  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:08.230194  303486 cri.go:89] found id: ""
	I0920 19:07:08.230230  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.230239  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:08.230246  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:08.230304  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:08.268526  303486 cri.go:89] found id: ""
	I0920 19:07:08.268558  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.268566  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:08.268573  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:08.268631  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:08.302383  303486 cri.go:89] found id: ""
	I0920 19:07:08.302411  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.302420  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:08.302428  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:08.302492  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:08.336435  303486 cri.go:89] found id: ""
	I0920 19:07:08.336469  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.336479  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:08.336491  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:08.336505  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:08.418086  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:08.418129  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:08.458355  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:08.458391  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:08.507017  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:08.507062  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:08.522701  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:08.522737  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:08.592777  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
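Note: every "describe nodes" attempt in this log fails the same way because kubectl is pointed at localhost:8443 and nothing is listening there yet. A quick, illustrative check from the node (not part of the recorded run) that distinguishes "port closed" from "apiserver up but unhealthy":

  # expect a refused connection while kube-apiserver is not running
  curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"
  sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"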
	I0920 19:07:11.093689  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:11.107438  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:11.107503  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:11.139701  303486 cri.go:89] found id: ""
	I0920 19:07:11.139742  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.139755  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:11.139765  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:11.139822  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:11.196143  303486 cri.go:89] found id: ""
	I0920 19:07:11.196182  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.196191  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:11.196197  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:11.196268  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:11.232121  303486 cri.go:89] found id: ""
	I0920 19:07:11.232156  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.232164  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:11.232171  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:11.232238  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:11.267307  303486 cri.go:89] found id: ""
	I0920 19:07:11.267338  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.267349  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:11.267358  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:11.267423  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:11.306583  303486 cri.go:89] found id: ""
	I0920 19:07:11.306614  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.306623  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:11.306631  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:11.306698  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:11.348162  303486 cri.go:89] found id: ""
	I0920 19:07:11.348188  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.348196  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:11.348203  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:11.348257  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:11.383612  303486 cri.go:89] found id: ""
	I0920 19:07:11.383649  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.383660  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:11.383669  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:11.383736  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:11.417538  303486 cri.go:89] found id: ""
	I0920 19:07:11.417575  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.417583  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:11.417593  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:11.417609  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:11.470242  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:11.470282  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:11.485448  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:11.485480  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:11.559466  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:11.559495  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:11.559513  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:11.636080  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:11.636133  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:14.177278  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:14.190413  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:14.190483  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:14.224238  303486 cri.go:89] found id: ""
	I0920 19:07:14.224264  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.224272  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:14.224278  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:14.224330  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:14.265253  303486 cri.go:89] found id: ""
	I0920 19:07:14.265285  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.265297  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:14.265304  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:14.265357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:14.300591  303486 cri.go:89] found id: ""
	I0920 19:07:14.300619  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.300633  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:14.300639  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:14.300695  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:14.335638  303486 cri.go:89] found id: ""
	I0920 19:07:14.335669  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.335677  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:14.335683  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:14.335735  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:14.369291  303486 cri.go:89] found id: ""
	I0920 19:07:14.369328  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.369336  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:14.369344  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:14.369397  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:14.404913  303486 cri.go:89] found id: ""
	I0920 19:07:14.404947  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.404958  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:14.404967  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:14.405034  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:14.438793  303486 cri.go:89] found id: ""
	I0920 19:07:14.438834  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.438845  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:14.438856  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:14.438926  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:14.475268  303486 cri.go:89] found id: ""
	I0920 19:07:14.475297  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.475305  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:14.475321  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:14.475342  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:14.528066  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:14.528126  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:14.542850  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:14.542891  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:14.612772  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:14.612800  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:14.612819  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:14.694528  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:14.694579  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:17.234389  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:17.247479  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:17.247544  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:17.285461  303486 cri.go:89] found id: ""
	I0920 19:07:17.285488  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.285496  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:17.285502  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:17.285553  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:17.320580  303486 cri.go:89] found id: ""
	I0920 19:07:17.320606  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.320614  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:17.320620  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:17.320677  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:17.356405  303486 cri.go:89] found id: ""
	I0920 19:07:17.356440  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.356462  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:17.356471  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:17.356526  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:17.391268  303486 cri.go:89] found id: ""
	I0920 19:07:17.391301  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.391309  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:17.391316  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:17.391381  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:17.429886  303486 cri.go:89] found id: ""
	I0920 19:07:17.429938  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.429950  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:17.429959  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:17.430022  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:17.466059  303486 cri.go:89] found id: ""
	I0920 19:07:17.466093  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.466104  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:17.466111  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:17.466176  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:17.501128  303486 cri.go:89] found id: ""
	I0920 19:07:17.501159  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.501168  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:17.501174  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:17.501247  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:17.536969  303486 cri.go:89] found id: ""
	I0920 19:07:17.536999  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.537007  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:17.537016  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:17.537031  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:17.592071  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:17.592119  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:17.609022  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:17.609057  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:17.696393  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:17.696420  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:17.696434  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:17.778077  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:17.778122  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:20.319211  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:20.332158  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:20.332235  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:20.366195  303486 cri.go:89] found id: ""
	I0920 19:07:20.366230  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.366241  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:20.366250  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:20.366313  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:20.401786  303486 cri.go:89] found id: ""
	I0920 19:07:20.401819  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.401829  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:20.401846  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:20.401943  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:20.433684  303486 cri.go:89] found id: ""
	I0920 19:07:20.433711  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.433719  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:20.433725  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:20.433783  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:20.469495  303486 cri.go:89] found id: ""
	I0920 19:07:20.469524  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.469535  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:20.469543  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:20.469613  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:20.502214  303486 cri.go:89] found id: ""
	I0920 19:07:20.502245  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.502256  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:20.502263  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:20.502329  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:20.535829  303486 cri.go:89] found id: ""
	I0920 19:07:20.535867  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.535879  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:20.535887  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:20.535952  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:20.569605  303486 cri.go:89] found id: ""
	I0920 19:07:20.569635  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.569643  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:20.569654  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:20.569726  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:20.603676  303486 cri.go:89] found id: ""
	I0920 19:07:20.603699  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.603706  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:20.603715  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:20.603726  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:20.656645  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:20.656692  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:20.671077  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:20.671107  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:20.740996  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:20.741028  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:20.741046  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:20.820541  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:20.820592  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:23.362973  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:23.380350  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:23.380432  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:23.423145  303486 cri.go:89] found id: ""
	I0920 19:07:23.423183  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.423193  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:23.423202  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:23.423272  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:23.459019  303486 cri.go:89] found id: ""
	I0920 19:07:23.459057  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.459068  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:23.459077  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:23.459144  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:23.502876  303486 cri.go:89] found id: ""
	I0920 19:07:23.502908  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.502920  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:23.502929  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:23.502994  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:23.538440  303486 cri.go:89] found id: ""
	I0920 19:07:23.538471  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.538481  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:23.538489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:23.538552  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:23.575164  303486 cri.go:89] found id: ""
	I0920 19:07:23.575199  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.575211  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:23.575220  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:23.575296  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:23.610449  303486 cri.go:89] found id: ""
	I0920 19:07:23.610480  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.610489  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:23.610495  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:23.610562  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:23.644164  303486 cri.go:89] found id: ""
	I0920 19:07:23.644195  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.644203  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:23.644209  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:23.644275  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:23.684379  303486 cri.go:89] found id: ""
	I0920 19:07:23.684417  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.684428  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:23.684442  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:23.684459  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:23.762838  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:23.762885  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:23.805616  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:23.805650  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:23.857080  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:23.857122  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:23.870602  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:23.870635  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:23.941187  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:26.441571  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:26.455091  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:26.455185  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:26.489658  303486 cri.go:89] found id: ""
	I0920 19:07:26.489696  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.489707  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:26.489716  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:26.489773  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:26.528829  303486 cri.go:89] found id: ""
	I0920 19:07:26.528865  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.528878  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:26.528886  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:26.528966  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:26.568402  303486 cri.go:89] found id: ""
	I0920 19:07:26.568429  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.568443  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:26.568450  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:26.568503  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:26.606654  303486 cri.go:89] found id: ""
	I0920 19:07:26.606683  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.606693  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:26.606701  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:26.606764  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:26.640825  303486 cri.go:89] found id: ""
	I0920 19:07:26.640856  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.640864  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:26.640871  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:26.640934  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:26.677023  303486 cri.go:89] found id: ""
	I0920 19:07:26.677054  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.677062  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:26.677068  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:26.677123  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:26.712921  303486 cri.go:89] found id: ""
	I0920 19:07:26.712956  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.712964  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:26.712971  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:26.713031  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:26.747750  303486 cri.go:89] found id: ""
	I0920 19:07:26.747778  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.747786  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:26.747796  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:26.747810  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:26.799240  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:26.799283  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:26.813197  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:26.813233  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:26.882751  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:26.882780  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:26.882799  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:26.965108  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:26.965146  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:29.503960  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:29.516601  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:29.516669  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:29.555581  303486 cri.go:89] found id: ""
	I0920 19:07:29.555622  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.555632  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:29.555640  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:29.555711  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:29.593858  303486 cri.go:89] found id: ""
	I0920 19:07:29.593885  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.593923  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:29.593937  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:29.593990  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:29.629507  303486 cri.go:89] found id: ""
	I0920 19:07:29.629538  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.629548  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:29.629557  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:29.629616  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:29.662880  303486 cri.go:89] found id: ""
	I0920 19:07:29.662913  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.662921  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:29.662928  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:29.662976  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:29.695422  303486 cri.go:89] found id: ""
	I0920 19:07:29.695448  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.695458  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:29.695466  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:29.695531  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:29.730641  303486 cri.go:89] found id: ""
	I0920 19:07:29.730673  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.730685  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:29.730693  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:29.730756  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:29.764186  303486 cri.go:89] found id: ""
	I0920 19:07:29.764220  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.764229  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:29.764238  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:29.764302  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:29.804146  303486 cri.go:89] found id: ""
	I0920 19:07:29.804174  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.804182  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:29.804191  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:29.804204  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:29.885573  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:29.885633  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:29.924619  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:29.924667  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:29.978187  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:29.978230  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:29.992161  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:29.992190  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:30.069767  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:32.570197  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:32.583160  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:32.583244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:32.620842  303486 cri.go:89] found id: ""
	I0920 19:07:32.620870  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.620881  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:32.620899  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:32.620958  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:32.657169  303486 cri.go:89] found id: ""
	I0920 19:07:32.657205  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.657216  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:32.657225  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:32.657292  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:32.694773  303486 cri.go:89] found id: ""
	I0920 19:07:32.694802  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.694809  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:32.694815  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:32.694882  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:32.733318  303486 cri.go:89] found id: ""
	I0920 19:07:32.733350  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.733360  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:32.733370  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:32.733436  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:32.766019  303486 cri.go:89] found id: ""
	I0920 19:07:32.766052  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.766062  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:32.766070  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:32.766138  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:32.801412  303486 cri.go:89] found id: ""
	I0920 19:07:32.801443  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.801454  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:32.801463  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:32.801533  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:32.833743  303486 cri.go:89] found id: ""
	I0920 19:07:32.833771  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.833779  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:32.833787  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:32.833847  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:32.866775  303486 cri.go:89] found id: ""
	I0920 19:07:32.866803  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.866811  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:32.866821  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:32.866839  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:32.919257  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:32.919310  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:32.933554  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:32.933602  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:33.002657  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:33.002702  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:33.002721  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:33.081271  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:33.081316  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:35.627131  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:35.640958  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:35.641032  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:35.675943  303486 cri.go:89] found id: ""
	I0920 19:07:35.675976  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.675984  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:35.675991  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:35.676044  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:35.710075  303486 cri.go:89] found id: ""
	I0920 19:07:35.710104  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.710116  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:35.710124  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:35.710194  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:35.747890  303486 cri.go:89] found id: ""
	I0920 19:07:35.747920  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.747931  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:35.747939  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:35.748004  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:35.786197  303486 cri.go:89] found id: ""
	I0920 19:07:35.786231  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.786242  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:35.786252  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:35.786314  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:35.819109  303486 cri.go:89] found id: ""
	I0920 19:07:35.819146  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.819158  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:35.819168  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:35.819244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:35.853244  303486 cri.go:89] found id: ""
	I0920 19:07:35.853282  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.853292  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:35.853301  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:35.853378  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:35.886864  303486 cri.go:89] found id: ""
	I0920 19:07:35.886897  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.886908  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:35.886917  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:35.886986  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:35.920872  303486 cri.go:89] found id: ""
	I0920 19:07:35.920906  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.920917  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:35.920939  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:35.920957  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:35.998741  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:35.998794  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:36.040681  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:36.040720  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:36.095848  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:36.095909  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:36.110903  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:36.110939  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:36.186658  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
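Note: the outer retry loop driving these entries re-runs the pgrep probe roughly every three seconds until kube-apiserver appears or the start timeout expires. A rough bash equivalent of that wait; the five-minute budget is an assumption, and only the pgrep pattern and the ~3s spacing come from this log:

  deadline=$(( $(date +%s) + 300 ))   # assumed budget; the real timeout is not shown in this log
  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
    [ "$(date +%s)" -ge "$deadline" ] && { echo "timed out waiting for kube-apiserver"; break; }
    sleep 3
  done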
	I0920 19:07:38.687762  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:38.701640  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:38.701708  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:38.734908  303486 cri.go:89] found id: ""
	I0920 19:07:38.734946  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.734956  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:38.734966  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:38.735031  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:38.768062  303486 cri.go:89] found id: ""
	I0920 19:07:38.768100  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.768112  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:38.768120  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:38.768188  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:38.800881  303486 cri.go:89] found id: ""
	I0920 19:07:38.800915  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.800927  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:38.800936  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:38.801004  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:38.835119  303486 cri.go:89] found id: ""
	I0920 19:07:38.835148  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.835156  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:38.835164  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:38.835223  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:38.872677  303486 cri.go:89] found id: ""
	I0920 19:07:38.872712  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.872723  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:38.872733  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:38.872807  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:38.913921  303486 cri.go:89] found id: ""
	I0920 19:07:38.913955  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.913965  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:38.913972  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:38.914029  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:38.951849  303486 cri.go:89] found id: ""
	I0920 19:07:38.951882  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.951893  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:38.951902  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:38.951972  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:38.988117  303486 cri.go:89] found id: ""
	I0920 19:07:38.988149  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.988161  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:38.988177  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:38.988191  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:39.028804  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:39.028843  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:39.083374  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:39.083427  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:39.097434  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:39.097463  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:39.172185  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:39.172213  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:39.172226  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
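	(The cycle above then repeats every few seconds: the retry probes for a running kube-apiserver process, lists each expected control-plane container via crictl, and, finding none, falls back to collecting kubelet, dmesg, node, CRI-O, and container-status diagnostics. The following is only an illustrative bash sketch of that probe loop, not minikube's actual code; the component list mirrors the log, while the 5-second interval is an assumption.)

	# illustrative sketch, assuming crictl, journalctl, and the bundled kubectl are on the node
	for component in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                 kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="${component}")   # list containers by name, any state
	  [ -z "${ids}" ] && echo "no container found matching ${component}"
	done
	sudo journalctl -u kubelet -n 400                                          # kubelet logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings/errors
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig || true                        # fails while apiserver is down
	sudo journalctl -u crio -n 400                                             # CRI-O logs
	sudo crictl ps -a                                                          # container status
	sleep 5                                                                    # assumed retry interval
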
	I0920 19:07:41.756648  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:41.772358  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:41.772432  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:41.809067  303486 cri.go:89] found id: ""
	I0920 19:07:41.809109  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.809123  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:41.809132  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:41.809191  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:41.853413  303486 cri.go:89] found id: ""
	I0920 19:07:41.853445  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.853457  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:41.853465  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:41.853524  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:41.891536  303486 cri.go:89] found id: ""
	I0920 19:07:41.891569  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.891580  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:41.891588  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:41.891668  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:41.931046  303486 cri.go:89] found id: ""
	I0920 19:07:41.931085  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.931093  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:41.931099  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:41.931155  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:41.968120  303486 cri.go:89] found id: ""
	I0920 19:07:41.968152  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.968164  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:41.968172  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:41.968240  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:42.002478  303486 cri.go:89] found id: ""
	I0920 19:07:42.002512  303486 logs.go:276] 0 containers: []
	W0920 19:07:42.002523  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:42.002532  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:42.002599  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:42.038031  303486 cri.go:89] found id: ""
	I0920 19:07:42.038067  303486 logs.go:276] 0 containers: []
	W0920 19:07:42.038080  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:42.038087  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:42.038150  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:42.072124  303486 cri.go:89] found id: ""
	I0920 19:07:42.072155  303486 logs.go:276] 0 containers: []
	W0920 19:07:42.072166  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:42.072178  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:42.072195  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:42.128217  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:42.128259  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:42.142291  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:42.142322  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:42.215278  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:42.215305  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:42.215324  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:42.293431  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:42.293476  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:44.836094  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:44.850327  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:44.850397  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:44.884595  303486 cri.go:89] found id: ""
	I0920 19:07:44.884624  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.884632  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:44.884639  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:44.884711  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:44.917727  303486 cri.go:89] found id: ""
	I0920 19:07:44.917754  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.917763  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:44.917769  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:44.917837  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:44.955821  303486 cri.go:89] found id: ""
	I0920 19:07:44.955860  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.955871  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:44.955879  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:44.955937  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:44.994543  303486 cri.go:89] found id: ""
	I0920 19:07:44.994579  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.994590  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:44.994598  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:44.994651  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:45.031839  303486 cri.go:89] found id: ""
	I0920 19:07:45.031877  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.031888  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:45.031896  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:45.031962  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:45.070554  303486 cri.go:89] found id: ""
	I0920 19:07:45.070588  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.070601  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:45.070609  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:45.070678  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:45.108727  303486 cri.go:89] found id: ""
	I0920 19:07:45.108760  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.108771  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:45.108779  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:45.108855  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:45.144045  303486 cri.go:89] found id: ""
	I0920 19:07:45.144075  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.144083  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:45.144094  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:45.144108  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:45.185800  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:45.185834  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:45.238364  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:45.238410  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:45.252111  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:45.252145  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:45.329009  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:45.329036  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:45.329051  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:47.912910  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:47.926378  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:47.926458  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:47.961067  303486 cri.go:89] found id: ""
	I0920 19:07:47.961094  303486 logs.go:276] 0 containers: []
	W0920 19:07:47.961103  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:47.961111  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:47.961172  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:48.006680  303486 cri.go:89] found id: ""
	I0920 19:07:48.006717  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.006729  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:48.006738  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:48.006805  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:48.042230  303486 cri.go:89] found id: ""
	I0920 19:07:48.042261  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.042272  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:48.042281  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:48.042349  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:48.080779  303486 cri.go:89] found id: ""
	I0920 19:07:48.080836  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.080850  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:48.080860  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:48.080931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:48.119439  303486 cri.go:89] found id: ""
	I0920 19:07:48.119469  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.119477  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:48.119483  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:48.119536  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:48.156219  303486 cri.go:89] found id: ""
	I0920 19:07:48.156258  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.156269  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:48.156279  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:48.156354  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:48.192112  303486 cri.go:89] found id: ""
	I0920 19:07:48.192151  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.192162  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:48.192170  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:48.192240  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:48.228916  303486 cri.go:89] found id: ""
	I0920 19:07:48.228958  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.228968  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:48.228981  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:48.229003  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:48.284073  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:48.284115  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:48.297677  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:48.297713  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:48.374834  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:48.374860  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:48.374876  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:48.455468  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:48.455512  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:50.998354  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:51.012827  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:51.012904  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:51.046701  303486 cri.go:89] found id: ""
	I0920 19:07:51.046739  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.046750  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:51.046758  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:51.046827  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:51.083829  303486 cri.go:89] found id: ""
	I0920 19:07:51.083867  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.083878  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:51.083891  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:51.083965  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:51.124126  303486 cri.go:89] found id: ""
	I0920 19:07:51.124170  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.124180  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:51.124187  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:51.124254  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:51.159141  303486 cri.go:89] found id: ""
	I0920 19:07:51.159175  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.159184  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:51.159190  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:51.159253  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:51.192793  303486 cri.go:89] found id: ""
	I0920 19:07:51.192829  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.192840  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:51.192863  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:51.192938  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:51.225489  303486 cri.go:89] found id: ""
	I0920 19:07:51.225515  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.225524  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:51.225530  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:51.225582  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:51.258256  303486 cri.go:89] found id: ""
	I0920 19:07:51.258283  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.258294  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:51.258301  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:51.258363  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:51.292474  303486 cri.go:89] found id: ""
	I0920 19:07:51.292504  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.292512  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:51.292522  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:51.292537  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:51.331386  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:51.331422  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:51.385136  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:51.385182  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:51.400792  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:51.400828  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:51.492771  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:51.492795  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:51.492810  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:54.074889  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:54.088453  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:54.088534  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:54.125096  303486 cri.go:89] found id: ""
	I0920 19:07:54.125138  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.125159  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:54.125166  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:54.125231  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:54.159630  303486 cri.go:89] found id: ""
	I0920 19:07:54.159665  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.159676  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:54.159685  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:54.159759  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:54.195919  303486 cri.go:89] found id: ""
	I0920 19:07:54.195951  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.195965  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:54.195972  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:54.196042  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:54.230294  303486 cri.go:89] found id: ""
	I0920 19:07:54.230323  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.230332  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:54.230339  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:54.230396  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:54.266764  303486 cri.go:89] found id: ""
	I0920 19:07:54.266793  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.266800  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:54.266807  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:54.266865  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:54.300704  303486 cri.go:89] found id: ""
	I0920 19:07:54.300731  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.300741  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:54.300750  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:54.300817  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:54.334447  303486 cri.go:89] found id: ""
	I0920 19:07:54.334473  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.334480  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:54.334487  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:54.334546  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:54.369814  303486 cri.go:89] found id: ""
	I0920 19:07:54.369858  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.369866  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:54.369878  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:54.369890  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:54.423088  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:54.423135  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:54.436770  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:54.436801  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:54.510731  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:54.510757  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:54.510773  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:54.593041  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:54.593091  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:57.134030  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:57.147605  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:57.147674  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:57.202662  303486 cri.go:89] found id: ""
	I0920 19:07:57.202690  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.202699  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:57.202705  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:57.202757  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:57.236448  303486 cri.go:89] found id: ""
	I0920 19:07:57.236476  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.236484  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:57.236493  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:57.236558  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:57.269450  303486 cri.go:89] found id: ""
	I0920 19:07:57.269478  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.269485  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:57.269491  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:57.269544  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:57.305749  303486 cri.go:89] found id: ""
	I0920 19:07:57.305784  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.305795  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:57.305806  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:57.305877  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:57.339802  303486 cri.go:89] found id: ""
	I0920 19:07:57.339844  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.339857  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:57.339866  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:57.339942  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:57.371929  303486 cri.go:89] found id: ""
	I0920 19:07:57.371962  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.371971  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:57.371980  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:57.372051  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:57.405749  303486 cri.go:89] found id: ""
	I0920 19:07:57.405789  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.405802  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:57.405812  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:57.405888  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:57.439259  303486 cri.go:89] found id: ""
	I0920 19:07:57.439291  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.439300  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:57.439310  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:57.439323  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:57.491405  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:57.491450  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:57.505992  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:57.506027  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:57.580598  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:57.580623  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:57.580638  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:57.659475  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:57.659513  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:00.201478  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:00.217162  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:00.217228  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:00.252219  303486 cri.go:89] found id: ""
	I0920 19:08:00.252247  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.252256  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:00.252263  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:00.252334  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:00.287244  303486 cri.go:89] found id: ""
	I0920 19:08:00.287283  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.287295  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:00.287302  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:00.287367  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:00.325785  303486 cri.go:89] found id: ""
	I0920 19:08:00.325818  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.325829  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:00.325839  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:00.325931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:00.359718  303486 cri.go:89] found id: ""
	I0920 19:08:00.359747  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.359757  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:00.359766  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:00.359847  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:00.399105  303486 cri.go:89] found id: ""
	I0920 19:08:00.399147  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.399156  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:00.399163  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:00.399227  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:00.433647  303486 cri.go:89] found id: ""
	I0920 19:08:00.433675  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.433683  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:00.433692  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:00.433756  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:00.467771  303486 cri.go:89] found id: ""
	I0920 19:08:00.467820  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.467832  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:00.467841  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:00.467911  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:00.511320  303486 cri.go:89] found id: ""
	I0920 19:08:00.511363  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.511376  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:00.511392  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:00.511414  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:00.594669  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:00.594703  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:00.594723  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:00.672747  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:00.672800  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:00.710001  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:00.710049  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:00.760333  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:00.760378  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:03.274393  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:03.289260  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:03.289352  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:03.327884  303486 cri.go:89] found id: ""
	I0920 19:08:03.327919  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.327932  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:03.327942  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:03.328015  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:03.367259  303486 cri.go:89] found id: ""
	I0920 19:08:03.367289  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.367297  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:03.367303  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:03.367361  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:03.405843  303486 cri.go:89] found id: ""
	I0920 19:08:03.405899  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.405932  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:03.405942  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:03.406056  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:03.441026  303486 cri.go:89] found id: ""
	I0920 19:08:03.441058  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.441069  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:03.441078  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:03.441147  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:03.477213  303486 cri.go:89] found id: ""
	I0920 19:08:03.477249  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.477261  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:03.477327  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:03.477415  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:03.515843  303486 cri.go:89] found id: ""
	I0920 19:08:03.515880  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.515888  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:03.515895  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:03.515945  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:03.566972  303486 cri.go:89] found id: ""
	I0920 19:08:03.567009  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.567020  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:03.567028  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:03.567097  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:03.616957  303486 cri.go:89] found id: ""
	I0920 19:08:03.617000  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.617013  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:03.617029  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:03.617048  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:03.683140  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:03.683192  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:03.697225  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:03.697267  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:03.770430  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:03.770455  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:03.770478  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:03.848796  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:03.848836  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:06.387706  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:06.401600  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:06.401669  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:06.437854  303486 cri.go:89] found id: ""
	I0920 19:08:06.437890  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.437917  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:06.437926  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:06.437993  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:06.472617  303486 cri.go:89] found id: ""
	I0920 19:08:06.472647  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.472655  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:06.472662  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:06.472718  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:06.510083  303486 cri.go:89] found id: ""
	I0920 19:08:06.510118  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.510131  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:06.510140  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:06.510212  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:06.546388  303486 cri.go:89] found id: ""
	I0920 19:08:06.546418  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.546427  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:06.546434  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:06.546485  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:06.584043  303486 cri.go:89] found id: ""
	I0920 19:08:06.584084  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.584096  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:06.584106  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:06.584182  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:06.622118  303486 cri.go:89] found id: ""
	I0920 19:08:06.622147  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.622155  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:06.622161  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:06.622217  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:06.655513  303486 cri.go:89] found id: ""
	I0920 19:08:06.655552  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.655585  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:06.655593  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:06.655657  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:06.690286  303486 cri.go:89] found id: ""
	I0920 19:08:06.690324  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.690336  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:06.690350  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:06.690368  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:06.729229  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:06.729259  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:06.780368  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:06.780411  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:06.794746  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:06.794782  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:06.866918  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:06.866944  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:06.866967  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:09.451583  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:09.465111  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:09.465178  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:09.497679  303486 cri.go:89] found id: ""
	I0920 19:08:09.497713  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.497725  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:09.497733  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:09.497797  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:09.535297  303486 cri.go:89] found id: ""
	I0920 19:08:09.535334  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.535345  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:09.535353  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:09.535427  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:09.572449  303486 cri.go:89] found id: ""
	I0920 19:08:09.572482  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.572491  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:09.572498  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:09.572608  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:09.612672  303486 cri.go:89] found id: ""
	I0920 19:08:09.612697  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.612705  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:09.612711  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:09.612797  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:09.654366  303486 cri.go:89] found id: ""
	I0920 19:08:09.654399  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.654408  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:09.654415  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:09.654470  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:09.694825  303486 cri.go:89] found id: ""
	I0920 19:08:09.694858  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.694870  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:09.694878  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:09.694942  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:09.731618  303486 cri.go:89] found id: ""
	I0920 19:08:09.731682  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.731693  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:09.731702  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:09.731775  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:09.766717  303486 cri.go:89] found id: ""
	I0920 19:08:09.766755  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.766765  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:09.766779  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:09.766794  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:09.823505  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:09.823549  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:09.837622  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:09.837658  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:09.919105  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:09.919139  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:09.919156  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:10.000899  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:10.000943  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:12.542974  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:12.557265  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:12.557335  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:12.594099  303486 cri.go:89] found id: ""
	I0920 19:08:12.594126  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.594134  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:12.594140  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:12.594199  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:12.627271  303486 cri.go:89] found id: ""
	I0920 19:08:12.627301  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.627308  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:12.627314  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:12.627366  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:12.661225  303486 cri.go:89] found id: ""
	I0920 19:08:12.661256  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.661265  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:12.661272  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:12.661332  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:12.701381  303486 cri.go:89] found id: ""
	I0920 19:08:12.701424  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.701437  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:12.701447  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:12.701524  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:12.739189  303486 cri.go:89] found id: ""
	I0920 19:08:12.739227  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.739235  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:12.739246  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:12.739299  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:12.780931  303486 cri.go:89] found id: ""
	I0920 19:08:12.780958  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.781055  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:12.781068  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:12.781124  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:12.818097  303486 cri.go:89] found id: ""
	I0920 19:08:12.818137  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.818150  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:12.818161  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:12.818294  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:12.852925  303486 cri.go:89] found id: ""
	I0920 19:08:12.852957  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.852965  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:12.852975  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:12.852990  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:12.924746  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:12.924774  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:12.924791  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:13.005668  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:13.005718  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:13.044327  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:13.044359  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:13.094788  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:13.094833  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:15.611965  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:15.625857  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:15.625960  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:15.662138  303486 cri.go:89] found id: ""
	I0920 19:08:15.662169  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.662177  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:15.662184  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:15.662261  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:15.696000  303486 cri.go:89] found id: ""
	I0920 19:08:15.696067  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.696100  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:15.696115  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:15.696234  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:15.735594  303486 cri.go:89] found id: ""
	I0920 19:08:15.735625  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.735633  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:15.735640  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:15.735699  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:15.774666  303486 cri.go:89] found id: ""
	I0920 19:08:15.774693  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.774703  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:15.774712  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:15.774777  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:15.810754  303486 cri.go:89] found id: ""
	I0920 19:08:15.810799  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.810811  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:15.810820  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:15.810884  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:15.846709  303486 cri.go:89] found id: ""
	I0920 19:08:15.846739  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.846748  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:15.846757  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:15.846819  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:15.880798  303486 cri.go:89] found id: ""
	I0920 19:08:15.880825  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.880833  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:15.880839  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:15.880895  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:15.915119  303486 cri.go:89] found id: ""
	I0920 19:08:15.915150  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.915159  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:15.915170  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:15.915186  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:15.966048  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:15.966087  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:15.979287  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:15.979322  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:16.052129  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:16.052163  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:16.052180  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:16.137743  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:16.137788  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:18.678389  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:18.693073  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:18.693152  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:18.734909  303486 cri.go:89] found id: ""
	I0920 19:08:18.734943  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.734954  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:18.734962  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:18.735028  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:18.773472  303486 cri.go:89] found id: ""
	I0920 19:08:18.773506  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.773517  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:18.773525  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:18.773620  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:18.812184  303486 cri.go:89] found id: ""
	I0920 19:08:18.812218  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.812228  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:18.812236  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:18.812305  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:18.846569  303486 cri.go:89] found id: ""
	I0920 19:08:18.846608  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.846619  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:18.846627  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:18.846700  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:18.881794  303486 cri.go:89] found id: ""
	I0920 19:08:18.881836  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.881862  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:18.881870  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:18.881943  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:18.919657  303486 cri.go:89] found id: ""
	I0920 19:08:18.919688  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.919698  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:18.919708  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:18.919774  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:18.955117  303486 cri.go:89] found id: ""
	I0920 19:08:18.955146  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.955157  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:18.955166  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:18.955243  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:18.992389  303486 cri.go:89] found id: ""
	I0920 19:08:18.992422  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.992430  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:18.992444  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:18.992460  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:19.070374  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:19.070417  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:19.110793  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:19.110825  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:19.163783  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:19.163830  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:19.177348  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:19.177387  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:19.249469  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:21.749644  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:21.764920  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:21.765006  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:21.803443  303486 cri.go:89] found id: ""
	I0920 19:08:21.803473  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.803481  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:21.803489  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:21.803545  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:21.844552  303486 cri.go:89] found id: ""
	I0920 19:08:21.844582  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.844593  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:21.844601  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:21.844672  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:21.878979  303486 cri.go:89] found id: ""
	I0920 19:08:21.879007  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.879017  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:21.879029  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:21.879099  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:21.915745  303486 cri.go:89] found id: ""
	I0920 19:08:21.915773  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.915783  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:21.915794  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:21.915865  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:21.948999  303486 cri.go:89] found id: ""
	I0920 19:08:21.949031  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.949043  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:21.949052  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:21.949118  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:21.984238  303486 cri.go:89] found id: ""
	I0920 19:08:21.984269  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.984277  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:21.984284  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:21.984357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:22.018581  303486 cri.go:89] found id: ""
	I0920 19:08:22.018610  303486 logs.go:276] 0 containers: []
	W0920 19:08:22.018620  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:22.018628  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:22.018694  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:22.051868  303486 cri.go:89] found id: ""
	I0920 19:08:22.051903  303486 logs.go:276] 0 containers: []
	W0920 19:08:22.051913  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:22.051925  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:22.051942  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:22.106711  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:22.106756  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:22.120910  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:22.120940  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:22.196564  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:22.196591  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:22.196608  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:22.275235  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:22.275288  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:24.821956  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:24.836846  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:24.836918  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:24.878371  303486 cri.go:89] found id: ""
	I0920 19:08:24.878398  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.878406  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:24.878413  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:24.878464  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:24.911450  303486 cri.go:89] found id: ""
	I0920 19:08:24.911480  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.911489  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:24.911497  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:24.911590  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:24.949248  303486 cri.go:89] found id: ""
	I0920 19:08:24.949281  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.949289  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:24.949298  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:24.949352  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:24.987899  303486 cri.go:89] found id: ""
	I0920 19:08:24.987932  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.987939  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:24.987948  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:24.988011  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:25.020589  303486 cri.go:89] found id: ""
	I0920 19:08:25.020627  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.020638  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:25.020646  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:25.020701  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:25.060223  303486 cri.go:89] found id: ""
	I0920 19:08:25.060250  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.060258  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:25.060266  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:25.060331  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:25.099111  303486 cri.go:89] found id: ""
	I0920 19:08:25.099141  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.099151  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:25.099160  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:25.099242  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:25.136055  303486 cri.go:89] found id: ""
	I0920 19:08:25.136089  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.136098  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:25.136118  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:25.136135  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:25.187619  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:25.187658  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:25.200983  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:25.201016  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:25.270746  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:25.270778  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:25.270795  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:25.350009  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:25.350050  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:27.889864  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:27.903156  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:27.903231  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:27.935087  303486 cri.go:89] found id: ""
	I0920 19:08:27.935118  303486 logs.go:276] 0 containers: []
	W0920 19:08:27.935128  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:27.935138  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:27.935199  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:27.970451  303486 cri.go:89] found id: ""
	I0920 19:08:27.970479  303486 logs.go:276] 0 containers: []
	W0920 19:08:27.970487  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:27.970494  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:27.970545  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:28.004931  303486 cri.go:89] found id: ""
	I0920 19:08:28.004980  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.004992  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:28.005002  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:28.005068  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:28.039438  303486 cri.go:89] found id: ""
	I0920 19:08:28.039470  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.039478  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:28.039485  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:28.039535  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:28.076023  303486 cri.go:89] found id: ""
	I0920 19:08:28.076050  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.076058  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:28.076064  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:28.076131  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:28.114726  303486 cri.go:89] found id: ""
	I0920 19:08:28.114761  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.114772  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:28.114781  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:28.114846  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:28.150790  303486 cri.go:89] found id: ""
	I0920 19:08:28.150822  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.150832  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:28.150841  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:28.150908  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:28.186576  303486 cri.go:89] found id: ""
	I0920 19:08:28.186606  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.186614  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:28.186626  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:28.186648  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:28.240939  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:28.240984  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:28.255267  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:28.255304  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:28.327773  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:28.327797  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:28.327809  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:28.418011  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:28.418055  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:30.962398  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:30.975385  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:30.975471  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:31.009898  303486 cri.go:89] found id: ""
	I0920 19:08:31.009952  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.009964  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:31.009973  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:31.010044  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:31.043639  303486 cri.go:89] found id: ""
	I0920 19:08:31.043670  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.043679  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:31.043689  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:31.043758  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:31.077709  303486 cri.go:89] found id: ""
	I0920 19:08:31.077745  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.077753  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:31.077759  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:31.077818  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:31.111117  303486 cri.go:89] found id: ""
	I0920 19:08:31.111150  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.111160  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:31.111168  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:31.111234  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:31.143888  303486 cri.go:89] found id: ""
	I0920 19:08:31.143921  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.143933  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:31.143942  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:31.144014  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:31.176694  303486 cri.go:89] found id: ""
	I0920 19:08:31.176729  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.176742  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:31.176751  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:31.176815  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:31.213794  303486 cri.go:89] found id: ""
	I0920 19:08:31.213832  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.213844  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:31.213854  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:31.213946  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:31.250160  303486 cri.go:89] found id: ""
	I0920 19:08:31.250219  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.250230  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:31.250244  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:31.250261  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:31.263748  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:31.263784  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:31.337719  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:31.337749  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:31.337762  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:31.420398  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:31.420446  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:31.459992  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:31.460030  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:34.014229  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:34.028129  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:34.028194  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:34.060793  303486 cri.go:89] found id: ""
	I0920 19:08:34.060832  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.060850  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:34.060859  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:34.060919  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:34.094440  303486 cri.go:89] found id: ""
	I0920 19:08:34.094467  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.094475  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:34.094481  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:34.094544  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:34.128824  303486 cri.go:89] found id: ""
	I0920 19:08:34.128861  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.128872  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:34.128881  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:34.128948  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:34.160861  303486 cri.go:89] found id: ""
	I0920 19:08:34.160894  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.160903  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:34.160911  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:34.160967  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:34.196897  303486 cri.go:89] found id: ""
	I0920 19:08:34.196933  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.196952  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:34.196958  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:34.197020  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:34.229083  303486 cri.go:89] found id: ""
	I0920 19:08:34.229115  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.229125  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:34.229134  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:34.229205  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:34.261877  303486 cri.go:89] found id: ""
	I0920 19:08:34.261922  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.261933  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:34.261941  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:34.262008  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:34.296145  303486 cri.go:89] found id: ""
	I0920 19:08:34.296177  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.296189  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:34.296199  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:34.296214  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:34.361598  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:34.361624  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:34.361641  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:34.441067  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:34.441110  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:34.483333  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:34.483362  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:34.538345  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:34.538388  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:37.053155  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:37.067157  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:37.067230  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:37.101432  303486 cri.go:89] found id: ""
	I0920 19:08:37.101466  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.101476  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:37.101485  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:37.101550  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:37.134375  303486 cri.go:89] found id: ""
	I0920 19:08:37.134408  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.134416  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:37.134423  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:37.134487  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:37.167049  303486 cri.go:89] found id: ""
	I0920 19:08:37.167087  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.167099  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:37.167107  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:37.167175  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:37.209358  303486 cri.go:89] found id: ""
	I0920 19:08:37.209387  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.209397  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:37.209405  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:37.209470  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:37.243227  303486 cri.go:89] found id: ""
	I0920 19:08:37.243261  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.243272  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:37.243281  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:37.243332  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:37.276546  303486 cri.go:89] found id: ""
	I0920 19:08:37.276596  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.276607  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:37.276626  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:37.276688  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:37.311233  303486 cri.go:89] found id: ""
	I0920 19:08:37.311268  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.311279  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:37.311287  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:37.311352  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:37.349970  303486 cri.go:89] found id: ""
	I0920 19:08:37.350003  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.350013  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:37.350025  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:37.350041  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:37.399405  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:37.399445  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:37.423764  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:37.423800  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:37.498797  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:37.498826  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:37.498841  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:37.575521  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:37.575566  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:40.118650  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:40.131967  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:40.132051  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:40.165313  303486 cri.go:89] found id: ""
	I0920 19:08:40.165349  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.165358  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:40.165366  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:40.165439  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:40.197194  303486 cri.go:89] found id: ""
	I0920 19:08:40.197223  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.197232  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:40.197238  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:40.197289  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:40.236769  303486 cri.go:89] found id: ""
	I0920 19:08:40.236800  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.236810  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:40.236819  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:40.236888  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:40.271960  303486 cri.go:89] found id: ""
	I0920 19:08:40.271984  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.271992  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:40.271998  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:40.272049  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:40.307874  303486 cri.go:89] found id: ""
	I0920 19:08:40.307909  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.307917  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:40.307923  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:40.307982  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:40.342128  303486 cri.go:89] found id: ""
	I0920 19:08:40.342160  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.342168  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:40.342175  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:40.342233  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:40.381493  303486 cri.go:89] found id: ""
	I0920 19:08:40.381529  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.381542  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:40.381551  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:40.381617  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:40.415164  303486 cri.go:89] found id: ""
	I0920 19:08:40.415199  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.415211  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:40.415222  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:40.415238  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:40.488306  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:40.488330  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:40.488350  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:40.567193  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:40.567235  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:40.607256  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:40.607287  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:40.659504  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:40.659542  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:43.174043  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:43.188690  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:43.188790  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:43.227223  303486 cri.go:89] found id: ""
	I0920 19:08:43.227251  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.227259  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:43.227267  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:43.227356  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:43.260099  303486 cri.go:89] found id: ""
	I0920 19:08:43.260128  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.260137  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:43.260143  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:43.260195  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:43.297846  303486 cri.go:89] found id: ""
	I0920 19:08:43.297875  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.297886  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:43.297894  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:43.297980  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:43.334026  303486 cri.go:89] found id: ""
	I0920 19:08:43.334061  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.334070  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:43.334078  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:43.334147  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:43.367765  303486 cri.go:89] found id: ""
	I0920 19:08:43.367795  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.367806  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:43.367814  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:43.367890  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:43.402722  303486 cri.go:89] found id: ""
	I0920 19:08:43.402766  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.402778  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:43.402787  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:43.402852  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:43.439643  303486 cri.go:89] found id: ""
	I0920 19:08:43.439674  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.439682  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:43.439690  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:43.439742  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:43.475931  303486 cri.go:89] found id: ""
	I0920 19:08:43.475965  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.475976  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:43.475991  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:43.476006  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:43.545694  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:43.545725  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:43.545739  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:43.627493  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:43.627549  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:43.667758  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:43.667794  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:43.721803  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:43.721851  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:46.237499  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:46.250854  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:46.250925  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:46.288918  303486 cri.go:89] found id: ""
	I0920 19:08:46.288950  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.288957  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:46.288964  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:46.289026  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:46.321113  303486 cri.go:89] found id: ""
	I0920 19:08:46.321149  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.321159  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:46.321168  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:46.321239  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:46.359606  303486 cri.go:89] found id: ""
	I0920 19:08:46.359643  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.359652  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:46.359659  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:46.359729  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:46.397059  303486 cri.go:89] found id: ""
	I0920 19:08:46.397089  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.397098  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:46.397104  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:46.397174  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:46.438224  303486 cri.go:89] found id: ""
	I0920 19:08:46.438261  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.438271  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:46.438279  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:46.438355  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:46.476933  303486 cri.go:89] found id: ""
	I0920 19:08:46.476963  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.476973  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:46.476981  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:46.477047  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:46.522115  303486 cri.go:89] found id: ""
	I0920 19:08:46.522150  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.522160  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:46.522167  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:46.522236  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:46.555508  303486 cri.go:89] found id: ""
	I0920 19:08:46.555541  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.555551  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:46.555565  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:46.555580  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:46.632314  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:46.632358  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:46.672381  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:46.672420  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:46.725777  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:46.725835  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:46.739924  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:46.739959  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:46.816667  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:49.317620  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:49.331792  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:49.331872  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:49.365417  303486 cri.go:89] found id: ""
	I0920 19:08:49.365457  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.365470  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:49.365479  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:49.365543  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:49.399422  303486 cri.go:89] found id: ""
	I0920 19:08:49.399455  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.399465  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:49.399474  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:49.399532  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:49.433040  303486 cri.go:89] found id: ""
	I0920 19:08:49.433069  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.433076  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:49.433082  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:49.433149  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:49.466865  303486 cri.go:89] found id: ""
	I0920 19:08:49.466897  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.466909  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:49.466917  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:49.466986  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:49.499542  303486 cri.go:89] found id: ""
	I0920 19:08:49.499574  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.499583  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:49.499589  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:49.499639  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:49.534310  303486 cri.go:89] found id: ""
	I0920 19:08:49.534338  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.534346  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:49.534353  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:49.534411  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:49.580271  303486 cri.go:89] found id: ""
	I0920 19:08:49.580297  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.580305  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:49.580312  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:49.580385  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:49.626519  303486 cri.go:89] found id: ""
	I0920 19:08:49.626554  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.626562  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:49.626572  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:49.626587  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:49.682923  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:49.682963  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:49.695859  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:49.695895  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:49.767626  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:49.767669  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:49.767697  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:49.849570  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:49.849614  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:52.387653  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:52.400693  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:52.400757  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:52.434320  303486 cri.go:89] found id: ""
	I0920 19:08:52.434358  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.434369  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:52.434381  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:52.434448  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:52.469167  303486 cri.go:89] found id: ""
	I0920 19:08:52.469202  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.469214  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:52.469222  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:52.469291  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:52.504241  303486 cri.go:89] found id: ""
	I0920 19:08:52.504287  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.504295  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:52.504304  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:52.504367  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:52.539573  303486 cri.go:89] found id: ""
	I0920 19:08:52.539604  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.539613  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:52.539619  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:52.539697  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:52.573794  303486 cri.go:89] found id: ""
	I0920 19:08:52.573821  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.573829  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:52.573834  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:52.573931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:52.607628  303486 cri.go:89] found id: ""
	I0920 19:08:52.607660  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.607670  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:52.607676  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:52.607738  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:52.639088  303486 cri.go:89] found id: ""
	I0920 19:08:52.639121  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.639132  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:52.639140  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:52.639204  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:52.673585  303486 cri.go:89] found id: ""
	I0920 19:08:52.673624  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.673636  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:52.673650  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:52.673667  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:52.726463  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:52.726504  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:52.739520  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:52.739553  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:52.820610  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:52.820638  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:52.820653  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:52.898567  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:52.898612  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:55.440875  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:55.454526  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:55.454602  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:55.490616  303486 cri.go:89] found id: ""
	I0920 19:08:55.490655  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.490664  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:55.490671  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:55.490735  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:55.530256  303486 cri.go:89] found id: ""
	I0920 19:08:55.530287  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.530296  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:55.530304  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:55.530357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:55.565209  303486 cri.go:89] found id: ""
	I0920 19:08:55.565242  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.565253  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:55.565260  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:55.565319  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:55.599522  303486 cri.go:89] found id: ""
	I0920 19:08:55.599553  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.599563  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:55.599571  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:55.599634  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:55.634662  303486 cri.go:89] found id: ""
	I0920 19:08:55.634692  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.634700  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:55.634707  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:55.634759  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:55.670326  303486 cri.go:89] found id: ""
	I0920 19:08:55.670361  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.670372  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:55.670379  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:55.670434  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:55.702589  303486 cri.go:89] found id: ""
	I0920 19:08:55.702617  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.702625  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:55.702632  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:55.702694  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:55.737615  303486 cri.go:89] found id: ""
	I0920 19:08:55.737643  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.737653  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:55.737667  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:55.737682  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:55.816827  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:55.816873  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:55.855521  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:55.855550  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:55.905002  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:55.905047  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:55.918292  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:55.918324  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:55.987445  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:58.488566  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:58.503898  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:58.504001  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:58.539089  303486 cri.go:89] found id: ""
	I0920 19:08:58.539117  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.539127  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:58.539135  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:58.539199  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:58.576432  303486 cri.go:89] found id: ""
	I0920 19:08:58.576459  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.576467  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:58.576473  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:58.576542  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:58.613779  303486 cri.go:89] found id: ""
	I0920 19:08:58.613814  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.613825  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:58.613833  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:58.613932  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:58.648717  303486 cri.go:89] found id: ""
	I0920 19:08:58.648757  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.648768  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:58.648777  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:58.648845  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:58.681533  303486 cri.go:89] found id: ""
	I0920 19:08:58.681568  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.681585  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:58.681593  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:58.681647  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:58.714833  303486 cri.go:89] found id: ""
	I0920 19:08:58.714867  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.714877  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:58.714886  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:58.714951  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:58.755939  303486 cri.go:89] found id: ""
	I0920 19:08:58.755972  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.755980  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:58.755986  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:58.756037  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:58.793195  303486 cri.go:89] found id: ""
	I0920 19:08:58.793229  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.793240  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:58.793252  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:58.793267  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:58.807903  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:58.807939  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:58.873993  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:58.874022  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:58.874042  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:58.955201  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:58.955249  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:58.994230  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:58.994265  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:01.548403  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:01.561467  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:01.561541  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:01.595339  303486 cri.go:89] found id: ""
	I0920 19:09:01.595374  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.595382  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:01.595388  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:01.595463  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:01.631995  303486 cri.go:89] found id: ""
	I0920 19:09:01.632033  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.632043  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:01.632051  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:01.632119  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:01.667556  303486 cri.go:89] found id: ""
	I0920 19:09:01.667586  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.667596  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:01.667604  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:01.667669  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:01.702678  303486 cri.go:89] found id: ""
	I0920 19:09:01.702708  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.702716  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:01.702723  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:01.702786  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:01.739953  303486 cri.go:89] found id: ""
	I0920 19:09:01.739987  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.739999  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:01.740008  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:01.740075  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:01.774188  303486 cri.go:89] found id: ""
	I0920 19:09:01.774222  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.774239  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:01.774249  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:01.774317  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:01.808885  303486 cri.go:89] found id: ""
	I0920 19:09:01.808916  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.808927  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:01.808935  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:01.808997  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:01.842357  303486 cri.go:89] found id: ""
	I0920 19:09:01.842394  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.842404  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:01.842417  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:01.842433  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:01.881750  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:01.881782  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:01.932190  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:01.932236  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:01.946305  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:01.946337  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:02.020099  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:02.020127  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:02.020141  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:04.601186  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:04.614292  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:04.614374  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:04.649579  303486 cri.go:89] found id: ""
	I0920 19:09:04.649611  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.649619  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:04.649625  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:04.649683  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:04.684039  303486 cri.go:89] found id: ""
	I0920 19:09:04.684076  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.684094  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:04.684108  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:04.684182  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:04.729130  303486 cri.go:89] found id: ""
	I0920 19:09:04.729166  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.729177  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:04.729186  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:04.729244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:04.762646  303486 cri.go:89] found id: ""
	I0920 19:09:04.762682  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.762690  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:04.762697  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:04.762761  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:04.797492  303486 cri.go:89] found id: ""
	I0920 19:09:04.797518  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.797527  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:04.797533  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:04.797588  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:04.832780  303486 cri.go:89] found id: ""
	I0920 19:09:04.832813  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.832823  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:04.832831  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:04.832893  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:04.868489  303486 cri.go:89] found id: ""
	I0920 19:09:04.868526  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.868537  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:04.868546  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:04.868613  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:04.901115  303486 cri.go:89] found id: ""
	I0920 19:09:04.901156  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.901164  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:04.901174  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:04.901186  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:04.952435  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:04.952482  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:04.966450  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:04.966481  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:05.035951  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:05.035977  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:05.035991  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:05.120961  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:05.121016  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:07.659497  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:07.672989  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:07.673062  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:07.708200  303486 cri.go:89] found id: ""
	I0920 19:09:07.708236  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.708247  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:07.708256  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:07.708320  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:07.742116  303486 cri.go:89] found id: ""
	I0920 19:09:07.742156  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.742166  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:07.742175  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:07.742231  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:07.774369  303486 cri.go:89] found id: ""
	I0920 19:09:07.774401  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.774410  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:07.774419  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:07.774485  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:07.811727  303486 cri.go:89] found id: ""
	I0920 19:09:07.811756  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.811763  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:07.811769  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:07.811825  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:07.849613  303486 cri.go:89] found id: ""
	I0920 19:09:07.849646  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.849655  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:07.849661  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:07.849715  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:07.884643  303486 cri.go:89] found id: ""
	I0920 19:09:07.884679  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.884690  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:07.884698  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:07.884770  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:07.920240  303486 cri.go:89] found id: ""
	I0920 19:09:07.920272  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.920283  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:07.920292  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:07.920371  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:07.954729  303486 cri.go:89] found id: ""
	I0920 19:09:07.954768  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.954780  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:07.954792  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:07.954808  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:08.008679  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:08.008732  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:08.023637  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:08.023673  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:08.097298  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:08.097325  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:08.097340  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:08.173404  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:08.173444  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:10.718224  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:10.732520  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:10.732593  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:10.766764  303486 cri.go:89] found id: ""
	I0920 19:09:10.766800  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.766811  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:10.766821  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:10.766887  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:10.800039  303486 cri.go:89] found id: ""
	I0920 19:09:10.800077  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.800087  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:10.800095  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:10.800157  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:10.833931  303486 cri.go:89] found id: ""
	I0920 19:09:10.833969  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.833979  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:10.833985  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:10.834057  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:10.867714  303486 cri.go:89] found id: ""
	I0920 19:09:10.867752  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.867763  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:10.867771  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:10.867840  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:10.903026  303486 cri.go:89] found id: ""
	I0920 19:09:10.903060  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.903068  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:10.903075  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:10.903131  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:10.936968  303486 cri.go:89] found id: ""
	I0920 19:09:10.937002  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.937013  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:10.937021  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:10.937089  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:10.973055  303486 cri.go:89] found id: ""
	I0920 19:09:10.973079  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.973087  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:10.973093  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:10.973145  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:11.010283  303486 cri.go:89] found id: ""
	I0920 19:09:11.010310  303486 logs.go:276] 0 containers: []
	W0920 19:09:11.010321  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:11.010333  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:11.010352  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:11.025202  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:11.025239  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:11.104268  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:11.104295  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:11.104312  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:11.182281  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:11.182326  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:11.219296  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:11.219335  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:13.767833  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:13.780805  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:13.780890  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:13.822288  303486 cri.go:89] found id: ""
	I0920 19:09:13.822317  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.822327  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:13.822334  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:13.822388  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:13.862068  303486 cri.go:89] found id: ""
	I0920 19:09:13.862098  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.862106  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:13.862112  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:13.862163  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:13.898497  303486 cri.go:89] found id: ""
	I0920 19:09:13.898529  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.898540  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:13.898550  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:13.898618  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:13.935994  303486 cri.go:89] found id: ""
	I0920 19:09:13.936022  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.936030  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:13.936038  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:13.936105  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:13.973764  303486 cri.go:89] found id: ""
	I0920 19:09:13.973801  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.973812  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:13.973820  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:13.973898  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:14.009443  303486 cri.go:89] found id: ""
	I0920 19:09:14.009482  303486 logs.go:276] 0 containers: []
	W0920 19:09:14.009494  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:14.009502  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:14.009577  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:14.045593  303486 cri.go:89] found id: ""
	I0920 19:09:14.045629  303486 logs.go:276] 0 containers: []
	W0920 19:09:14.045639  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:14.045648  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:14.045714  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:14.086273  303486 cri.go:89] found id: ""
	I0920 19:09:14.086310  303486 logs.go:276] 0 containers: []
	W0920 19:09:14.086319  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:14.086330  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:14.086343  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:14.140730  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:14.140772  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:14.154198  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:14.154232  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:14.224716  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:14.224739  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:14.224754  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:14.302625  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:14.302665  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:16.840816  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:16.854905  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:16.855002  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:16.892994  303486 cri.go:89] found id: ""
	I0920 19:09:16.893028  303486 logs.go:276] 0 containers: []
	W0920 19:09:16.893038  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:16.893045  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:16.893103  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:16.931265  303486 cri.go:89] found id: ""
	I0920 19:09:16.931293  303486 logs.go:276] 0 containers: []
	W0920 19:09:16.931307  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:16.931313  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:16.931364  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:16.970085  303486 cri.go:89] found id: ""
	I0920 19:09:16.970119  303486 logs.go:276] 0 containers: []
	W0920 19:09:16.970129  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:16.970138  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:16.970189  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:17.003163  303486 cri.go:89] found id: ""
	I0920 19:09:17.003194  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.003206  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:17.003214  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:17.003282  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:17.040577  303486 cri.go:89] found id: ""
	I0920 19:09:17.040618  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.040633  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:17.040640  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:17.040706  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:17.073946  303486 cri.go:89] found id: ""
	I0920 19:09:17.073986  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.073995  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:17.074006  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:17.074066  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:17.111569  303486 cri.go:89] found id: ""
	I0920 19:09:17.111636  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.111648  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:17.111664  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:17.111730  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:17.148005  303486 cri.go:89] found id: ""
	I0920 19:09:17.148034  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.148044  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:17.148056  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:17.148072  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:17.222281  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:17.222306  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:17.222324  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:17.297577  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:17.297619  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:17.334709  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:17.334740  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:17.386279  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:17.386320  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:19.901017  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:19.914489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:19.914571  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:19.955023  303486 cri.go:89] found id: ""
	I0920 19:09:19.955051  303486 logs.go:276] 0 containers: []
	W0920 19:09:19.955060  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:19.955067  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:19.955125  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:19.995536  303486 cri.go:89] found id: ""
	I0920 19:09:19.995575  303486 logs.go:276] 0 containers: []
	W0920 19:09:19.995585  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:19.995594  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:19.995650  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:20.031153  303486 cri.go:89] found id: ""
	I0920 19:09:20.031181  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.031190  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:20.031198  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:20.031266  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:20.064145  303486 cri.go:89] found id: ""
	I0920 19:09:20.064174  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.064190  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:20.064199  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:20.064256  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:20.098399  303486 cri.go:89] found id: ""
	I0920 19:09:20.098429  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.098440  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:20.098449  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:20.098505  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:20.138805  303486 cri.go:89] found id: ""
	I0920 19:09:20.138833  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.138843  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:20.138852  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:20.138914  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:20.183291  303486 cri.go:89] found id: ""
	I0920 19:09:20.183322  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.183333  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:20.183342  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:20.183406  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:20.220344  303486 cri.go:89] found id: ""
	I0920 19:09:20.220378  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.220396  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:20.220409  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:20.220426  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:20.271043  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:20.271086  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:20.286724  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:20.286754  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:20.358233  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:20.358273  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:20.358291  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:20.439511  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:20.439568  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:22.982570  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:22.995384  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:22.995475  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:23.029031  303486 cri.go:89] found id: ""
	I0920 19:09:23.029069  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.029081  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:23.029091  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:23.029166  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:23.063291  303486 cri.go:89] found id: ""
	I0920 19:09:23.063325  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.063336  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:23.063343  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:23.063413  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:23.097494  303486 cri.go:89] found id: ""
	I0920 19:09:23.097525  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.097536  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:23.097545  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:23.097610  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:23.132169  303486 cri.go:89] found id: ""
	I0920 19:09:23.132197  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.132204  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:23.132211  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:23.132276  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:23.173651  303486 cri.go:89] found id: ""
	I0920 19:09:23.173682  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.173692  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:23.173700  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:23.173763  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:23.206098  303486 cri.go:89] found id: ""
	I0920 19:09:23.206135  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.206146  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:23.206155  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:23.206216  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:23.245422  303486 cri.go:89] found id: ""
	I0920 19:09:23.245466  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.245479  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:23.245489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:23.245569  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:23.280326  303486 cri.go:89] found id: ""
	I0920 19:09:23.280357  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.280365  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:23.280376  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:23.280390  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:23.330986  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:23.331034  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:23.344751  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:23.344788  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:23.420213  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:23.420239  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:23.420255  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:23.500449  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:23.500491  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:26.040050  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:26.056377  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:26.056463  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:26.094122  303486 cri.go:89] found id: ""
	I0920 19:09:26.094160  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.094170  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:26.094179  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:26.094246  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:26.129383  303486 cri.go:89] found id: ""
	I0920 19:09:26.129408  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.129415  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:26.129422  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:26.129472  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:26.163579  303486 cri.go:89] found id: ""
	I0920 19:09:26.163611  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.163621  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:26.163630  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:26.163699  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:26.208026  303486 cri.go:89] found id: ""
	I0920 19:09:26.208057  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.208065  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:26.208071  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:26.208138  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:26.245375  303486 cri.go:89] found id: ""
	I0920 19:09:26.245409  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.245421  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:26.245438  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:26.245500  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:26.280283  303486 cri.go:89] found id: ""
	I0920 19:09:26.280315  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.280326  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:26.280336  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:26.280397  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:26.314621  303486 cri.go:89] found id: ""
	I0920 19:09:26.314657  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.314670  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:26.314679  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:26.314773  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:26.347667  303486 cri.go:89] found id: ""
	I0920 19:09:26.347694  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.347701  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:26.347711  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:26.347722  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:26.397221  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:26.397259  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:26.411126  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:26.411157  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:26.479631  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:26.479657  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:26.479686  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:26.555439  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:26.555477  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:29.096877  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:29.110081  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:29.110170  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:29.152570  303486 cri.go:89] found id: ""
	I0920 19:09:29.152598  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.152608  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:29.152616  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:29.152689  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:29.188596  303486 cri.go:89] found id: ""
	I0920 19:09:29.188627  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.188638  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:29.188645  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:29.188713  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:29.228789  303486 cri.go:89] found id: ""
	I0920 19:09:29.228831  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.228841  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:29.228850  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:29.228913  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:29.260013  303486 cri.go:89] found id: ""
	I0920 19:09:29.260040  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.260048  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:29.260054  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:29.260105  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:29.293373  303486 cri.go:89] found id: ""
	I0920 19:09:29.293401  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.293411  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:29.293418  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:29.293487  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:29.325860  303486 cri.go:89] found id: ""
	I0920 19:09:29.325898  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.325925  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:29.325935  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:29.326027  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:29.358873  303486 cri.go:89] found id: ""
	I0920 19:09:29.358909  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.358921  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:29.358930  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:29.358994  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:29.392029  303486 cri.go:89] found id: ""
	I0920 19:09:29.392057  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.392067  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:29.392080  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:29.392095  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:29.467460  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:29.467504  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:29.508258  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:29.508298  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:29.559238  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:29.559274  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:29.574233  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:29.574264  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:29.649318  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:32.150539  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:32.168442  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:32.168527  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:32.210069  303486 cri.go:89] found id: ""
	I0920 19:09:32.210103  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.210120  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:32.210129  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:32.210191  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:32.243468  303486 cri.go:89] found id: ""
	I0920 19:09:32.243501  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.243511  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:32.243519  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:32.243586  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:32.275958  303486 cri.go:89] found id: ""
	I0920 19:09:32.275988  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.275996  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:32.276003  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:32.276056  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:32.312560  303486 cri.go:89] found id: ""
	I0920 19:09:32.312598  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.312609  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:32.312620  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:32.312695  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:32.347157  303486 cri.go:89] found id: ""
	I0920 19:09:32.347185  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.347193  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:32.347200  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:32.347264  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:32.382787  303486 cri.go:89] found id: ""
	I0920 19:09:32.382820  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.382832  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:32.382841  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:32.382898  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:32.416182  303486 cri.go:89] found id: ""
	I0920 19:09:32.416216  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.416226  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:32.416234  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:32.416297  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:32.448863  303486 cri.go:89] found id: ""
	I0920 19:09:32.448895  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.448906  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:32.448919  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:32.448934  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:32.501882  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:32.501934  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:32.517984  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:32.518014  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:32.588517  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:32.588547  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:32.588560  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:32.671869  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:32.671921  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:35.211780  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:35.225476  303486 kubeadm.go:597] duration metric: took 4m2.827297435s to restartPrimaryControlPlane
	W0920 19:09:35.225582  303486 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 19:09:35.225618  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:09:35.686956  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:35.701803  303486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:09:35.712572  303486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:09:35.722867  303486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:09:35.722894  303486 kubeadm.go:157] found existing configuration files:
	
	I0920 19:09:35.722948  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:09:35.732295  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:09:35.732358  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:09:35.741569  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:09:35.750515  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:09:35.750577  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:09:35.760469  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:09:35.770207  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:09:35.770284  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:09:35.780121  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:09:35.789887  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:09:35.789974  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:09:35.800914  303486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:09:35.871635  303486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 19:09:35.871691  303486 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:09:36.021411  303486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:09:36.021565  303486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:09:36.021773  303486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 19:09:36.217540  303486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:09:36.219542  303486 out.go:235]   - Generating certificates and keys ...
	I0920 19:09:36.219684  303486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:09:36.219769  303486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:09:36.219892  303486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:09:36.219973  303486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:09:36.220090  303486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:09:36.220181  303486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:09:36.220302  303486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:09:36.220414  303486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:09:36.220530  303486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:09:36.220626  303486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:09:36.220691  303486 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:09:36.220767  303486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:09:36.377012  303486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:09:36.706154  303486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:09:36.907341  303486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:09:37.091990  303486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:09:37.122813  303486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:09:37.124422  303486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:09:37.124531  303486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:09:37.277461  303486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:09:37.279714  303486 out.go:235]   - Booting up control plane ...
	I0920 19:09:37.279861  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:09:37.288448  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:09:37.289724  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:09:37.290822  303486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:09:37.294106  303486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 19:10:17.296411  303486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 19:10:17.296525  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:17.296765  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:22.297630  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:22.297923  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:32.298239  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:32.298525  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:52.299257  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:52.299561  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:11:32.301334  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:11:32.302020  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:11:32.302048  303486 kubeadm.go:310] 
	I0920 19:11:32.302147  303486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 19:11:32.302252  303486 kubeadm.go:310] 		timed out waiting for the condition
	I0920 19:11:32.302279  303486 kubeadm.go:310] 
	I0920 19:11:32.302366  303486 kubeadm.go:310] 	This error is likely caused by:
	I0920 19:11:32.302453  303486 kubeadm.go:310] 		- The kubelet is not running
	I0920 19:11:32.302713  303486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 19:11:32.302731  303486 kubeadm.go:310] 
	I0920 19:11:32.303023  303486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 19:11:32.303099  303486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 19:11:32.303200  303486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 19:11:32.303232  303486 kubeadm.go:310] 
	I0920 19:11:32.303438  303486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 19:11:32.303669  303486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 19:11:32.303699  303486 kubeadm.go:310] 
	I0920 19:11:32.303965  303486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 19:11:32.304199  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 19:11:32.304410  303486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 19:11:32.304577  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 19:11:32.304624  303486 kubeadm.go:310] 
	I0920 19:11:32.305105  303486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:11:32.305465  303486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 19:11:32.305655  303486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0920 19:11:32.305713  303486 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0920 19:11:32.305758  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:11:32.760742  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:11:32.775675  303486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:11:32.785785  303486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:11:32.785806  303486 kubeadm.go:157] found existing configuration files:
	
	I0920 19:11:32.785854  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:11:32.795133  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:11:32.795210  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:11:32.805681  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:11:32.815299  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:11:32.815362  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:11:32.827215  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:11:32.836597  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:11:32.836682  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:11:32.846621  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:11:32.855610  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:11:32.855675  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:11:32.866824  303486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:11:33.103745  303486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:13:29.101212  303486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 19:13:29.101347  303486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 19:13:29.103031  303486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 19:13:29.103142  303486 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:13:29.103216  303486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:13:29.103318  303486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:13:29.103437  303486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 19:13:29.103507  303486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:13:29.105521  303486 out.go:235]   - Generating certificates and keys ...
	I0920 19:13:29.105622  303486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:13:29.105704  303486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:13:29.105820  303486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:13:29.105955  303486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:13:29.106058  303486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:13:29.106132  303486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:13:29.106219  303486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:13:29.106318  303486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:13:29.106430  303486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:13:29.106548  303486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:13:29.106611  303486 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:13:29.106699  303486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:13:29.106766  303486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:13:29.106844  303486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:13:29.106935  303486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:13:29.107011  303486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:13:29.107117  303486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:13:29.107223  303486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:13:29.107289  303486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:13:29.107376  303486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:13:29.108804  303486 out.go:235]   - Booting up control plane ...
	I0920 19:13:29.108952  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:13:29.109021  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:13:29.109082  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:13:29.109166  303486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:13:29.109313  303486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 19:13:29.109359  303486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 19:13:29.109462  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.109630  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.109699  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.109878  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.109966  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.110133  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.110213  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.110382  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.110441  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.110606  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.110616  303486 kubeadm.go:310] 
	I0920 19:13:29.110661  303486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 19:13:29.110699  303486 kubeadm.go:310] 		timed out waiting for the condition
	I0920 19:13:29.110706  303486 kubeadm.go:310] 
	I0920 19:13:29.110739  303486 kubeadm.go:310] 	This error is likely caused by:
	I0920 19:13:29.110769  303486 kubeadm.go:310] 		- The kubelet is not running
	I0920 19:13:29.110866  303486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 19:13:29.110875  303486 kubeadm.go:310] 
	I0920 19:13:29.110969  303486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 19:13:29.111003  303486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 19:13:29.111031  303486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 19:13:29.111037  303486 kubeadm.go:310] 
	I0920 19:13:29.111141  303486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 19:13:29.111224  303486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 19:13:29.111231  303486 kubeadm.go:310] 
	I0920 19:13:29.111327  303486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 19:13:29.111407  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 19:13:29.111481  303486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 19:13:29.111542  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 19:13:29.111610  303486 kubeadm.go:394] duration metric: took 7m56.768319159s to StartCluster
	I0920 19:13:29.111640  303486 kubeadm.go:310] 
	I0920 19:13:29.111664  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:13:29.111734  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:13:29.157817  303486 cri.go:89] found id: ""
	I0920 19:13:29.157849  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.157859  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:13:29.157867  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:13:29.157950  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:13:29.192130  303486 cri.go:89] found id: ""
	I0920 19:13:29.192164  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.192179  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:13:29.192187  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:13:29.192243  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:13:29.227594  303486 cri.go:89] found id: ""
	I0920 19:13:29.227631  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.227642  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:13:29.227651  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:13:29.227724  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:13:29.261948  303486 cri.go:89] found id: ""
	I0920 19:13:29.261981  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.261996  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:13:29.262004  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:13:29.262072  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:13:29.295148  303486 cri.go:89] found id: ""
	I0920 19:13:29.295181  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.295191  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:13:29.295200  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:13:29.295270  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:13:29.328094  303486 cri.go:89] found id: ""
	I0920 19:13:29.328127  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.328135  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:13:29.328142  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:13:29.328194  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:13:29.368830  303486 cri.go:89] found id: ""
	I0920 19:13:29.368870  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.368878  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:13:29.368885  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:13:29.368947  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:13:29.420051  303486 cri.go:89] found id: ""
	I0920 19:13:29.420081  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.420091  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:13:29.420106  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:13:29.420123  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:13:29.498322  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:13:29.498350  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:13:29.498364  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:13:29.601796  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:13:29.601842  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:13:29.644325  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:13:29.644368  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:13:29.692691  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:13:29.692736  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0920 19:13:29.707508  303486 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 19:13:29.707577  303486 out.go:270] * 
	* 
	W0920 19:13:29.707646  303486 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 19:13:29.707664  303486 out.go:270] * 
	* 
	W0920 19:13:29.708560  303486 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 19:13:29.711313  303486 out.go:201] 
	W0920 19:13:29.712520  303486 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 19:13:29.712553  303486 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 19:13:29.712576  303486 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 19:13:29.713832  303486 out.go:201] 

                                                
                                                
** /stderr **
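(Editorial note, not part of the captured log.) The kubeadm `wait-control-plane` phase above failed because the kubelet's healthz endpoint on 127.0.0.1:10248 never answered, so no control-plane static pods were ever reported. A minimal diagnostic sketch, assuming the profile name old-k8s-version-425599 from this run and that the kvm2 node is still reachable; these are the same checks kubeadm suggests in the output above, run from the host via minikube ssh (commands are illustrative):

	# shell into the failing node for this profile
	minikube ssh -p old-k8s-version-425599
	# inside the node: check whether the kubelet ever started and why it exited
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# list any control-plane containers CRI-O managed to start (from the kubeadm hint above)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause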
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-425599 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
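(Editorial note, not part of the captured log.) The minikube output above suggests retrying with the kubelet cgroup driver pinned to systemd. A hedged sketch of that retry, reusing the arguments from the failing invocation recorded in this test and adding only the suggested flag (illustrative, not a command run by the test):

	out/minikube-linux-amd64 start -p old-k8s-version-425599 --memory=2200 \
	  --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd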
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-425599 -n old-k8s-version-425599
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-425599 -n old-k8s-version-425599: exit status 2 (253.618855ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-425599 logs -n 25
E0920 19:13:30.942084  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-425599 logs -n 25: (1.756231403s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-793540 sudo cat                             | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo                                 | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo                                 | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo                                 | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo find                            | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo crio                            | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-793540                                      | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-896665 | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | disable-driver-mounts-896665                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:57 UTC |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-037711             | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-037711                                   | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-339897            | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-339897                                  | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-612312  | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-037711                  | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-037711                                   | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC | 20 Sep 24 19:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-339897                 | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-425599        | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-339897                                  | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC | 20 Sep 24 19:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-612312       | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC | 20 Sep 24 19:09 UTC |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-425599                              | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-425599             | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-425599                              | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:01:28
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:01:28.948776  303486 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:01:28.948894  303486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:01:28.948900  303486 out.go:358] Setting ErrFile to fd 2...
	I0920 19:01:28.948906  303486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:01:28.949090  303486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 19:01:28.949637  303486 out.go:352] Setting JSON to false
	I0920 19:01:28.950705  303486 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9832,"bootTime":1726849057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 19:01:28.950809  303486 start.go:139] virtualization: kvm guest
	I0920 19:01:28.953226  303486 out.go:177] * [old-k8s-version-425599] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 19:01:28.955013  303486 notify.go:220] Checking for updates...
	I0920 19:01:28.955065  303486 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 19:01:28.956932  303486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:01:28.959076  303486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:01:28.961116  303486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 19:01:28.963396  303486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 19:01:28.965428  303486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:01:28.967688  303486 config.go:182] Loaded profile config "old-k8s-version-425599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 19:01:28.968112  303486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:01:28.968175  303486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:01:28.984002  303486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40325
	I0920 19:01:28.984552  303486 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:01:28.985260  303486 main.go:141] libmachine: Using API Version  1
	I0920 19:01:28.985291  303486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:01:28.985715  303486 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:01:28.985972  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:01:28.988070  303486 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 19:01:28.989565  303486 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:01:28.990007  303486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:01:28.990079  303486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:01:29.006020  303486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34059
	I0920 19:01:29.006492  303486 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:01:29.007046  303486 main.go:141] libmachine: Using API Version  1
	I0920 19:01:29.007078  303486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:01:29.007441  303486 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:01:29.007706  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:01:29.049785  303486 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 19:01:29.051185  303486 start.go:297] selected driver: kvm2
	I0920 19:01:29.051206  303486 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:01:29.051323  303486 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:01:29.052030  303486 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:01:29.052131  303486 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 19:01:29.068826  303486 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 19:01:29.069232  303486 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:01:29.069262  303486 cni.go:84] Creating CNI manager for ""
	I0920 19:01:29.069297  303486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:01:29.069333  303486 start.go:340] cluster config:
	{Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:01:29.069439  303486 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:01:29.071617  303486 out.go:177] * Starting "old-k8s-version-425599" primary control-plane node in "old-k8s-version-425599" cluster
	I0920 19:01:27.086248  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:29.073133  303486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 19:01:29.073174  303486 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 19:01:29.073182  303486 cache.go:56] Caching tarball of preloaded images
	I0920 19:01:29.073269  303486 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 19:01:29.073285  303486 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 19:01:29.073388  303486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/config.json ...
	I0920 19:01:29.073573  303486 start.go:360] acquireMachinesLock for old-k8s-version-425599: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:01:33.166258  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:36.238261  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:42.318195  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:45.390223  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:51.470272  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:54.542277  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:00.622232  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:03.694275  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:09.774241  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:12.846248  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:18.926213  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:21.998195  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:28.078192  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:31.150239  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:37.230160  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:40.302224  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:46.382225  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:49.454205  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:55.534186  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:58.606232  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:04.686254  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:07.758234  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:13.838239  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:16.910321  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:22.990234  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:26.062339  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:32.142210  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:35.214256  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:41.294234  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:44.366288  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:50.446215  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:53.518266  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:59.598190  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:02.670240  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:08.750179  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:11.822239  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:17.902176  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:20.974235  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:23.977804  302869 start.go:364] duration metric: took 4m19.519175605s to acquireMachinesLock for "embed-certs-339897"
	I0920 19:04:23.977868  302869 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:04:23.977876  302869 fix.go:54] fixHost starting: 
	I0920 19:04:23.978233  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:04:23.978277  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:04:23.993804  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39423
	I0920 19:04:23.994326  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:04:23.994906  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:04:23.994925  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:04:23.995219  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:04:23.995413  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:23.995575  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:04:23.997417  302869 fix.go:112] recreateIfNeeded on embed-certs-339897: state=Stopped err=<nil>
	I0920 19:04:23.997439  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	W0920 19:04:23.997636  302869 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:04:24.001021  302869 out.go:177] * Restarting existing kvm2 VM for "embed-certs-339897" ...
	I0920 19:04:24.002636  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Start
	I0920 19:04:24.002842  302869 main.go:141] libmachine: (embed-certs-339897) Ensuring networks are active...
	I0920 19:04:24.003916  302869 main.go:141] libmachine: (embed-certs-339897) Ensuring network default is active
	I0920 19:04:24.004282  302869 main.go:141] libmachine: (embed-certs-339897) Ensuring network mk-embed-certs-339897 is active
	I0920 19:04:24.004647  302869 main.go:141] libmachine: (embed-certs-339897) Getting domain xml...
	I0920 19:04:24.005446  302869 main.go:141] libmachine: (embed-certs-339897) Creating domain...
	I0920 19:04:23.975096  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:04:23.975155  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:04:23.975457  302538 buildroot.go:166] provisioning hostname "no-preload-037711"
	I0920 19:04:23.975485  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:04:23.975712  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:04:23.977607  302538 machine.go:96] duration metric: took 4m37.412034117s to provisionDockerMachine
	I0920 19:04:23.977703  302538 fix.go:56] duration metric: took 4m37.437032108s for fixHost
	I0920 19:04:23.977718  302538 start.go:83] releasing machines lock for "no-preload-037711", held for 4m37.437081737s
	W0920 19:04:23.977745  302538 start.go:714] error starting host: provision: host is not running
	W0920 19:04:23.977850  302538 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0920 19:04:23.977859  302538 start.go:729] Will try again in 5 seconds ...
	I0920 19:04:25.258221  302869 main.go:141] libmachine: (embed-certs-339897) Waiting to get IP...
	I0920 19:04:25.259119  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:25.259493  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:25.259584  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:25.259481  304091 retry.go:31] will retry after 212.462393ms: waiting for machine to come up
	I0920 19:04:25.474057  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:25.474524  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:25.474564  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:25.474441  304091 retry.go:31] will retry after 306.01691ms: waiting for machine to come up
	I0920 19:04:25.782264  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:25.782729  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:25.782753  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:25.782706  304091 retry.go:31] will retry after 416.637796ms: waiting for machine to come up
	I0920 19:04:26.201336  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:26.201704  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:26.201738  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:26.201645  304091 retry.go:31] will retry after 583.373452ms: waiting for machine to come up
	I0920 19:04:26.786448  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:26.786854  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:26.786876  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:26.786807  304091 retry.go:31] will retry after 760.706965ms: waiting for machine to come up
	I0920 19:04:27.548786  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:27.549126  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:27.549149  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:27.549088  304091 retry.go:31] will retry after 615.829194ms: waiting for machine to come up
	I0920 19:04:28.167061  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:28.167601  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:28.167647  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:28.167419  304091 retry.go:31] will retry after 786.700064ms: waiting for machine to come up
	I0920 19:04:28.955294  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:28.955658  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:28.955685  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:28.955611  304091 retry.go:31] will retry after 1.309567829s: waiting for machine to come up
	I0920 19:04:28.979506  302538 start.go:360] acquireMachinesLock for no-preload-037711: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:04:30.267104  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:30.267645  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:30.267676  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:30.267583  304091 retry.go:31] will retry after 1.153396834s: waiting for machine to come up
	I0920 19:04:31.423030  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:31.423604  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:31.423629  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:31.423542  304091 retry.go:31] will retry after 1.858288741s: waiting for machine to come up
	I0920 19:04:33.284886  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:33.285381  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:33.285419  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:33.285334  304091 retry.go:31] will retry after 2.343802005s: waiting for machine to come up
	I0920 19:04:35.630962  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:35.631408  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:35.631439  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:35.631359  304091 retry.go:31] will retry after 2.42254126s: waiting for machine to come up
	I0920 19:04:38.055128  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:38.055796  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:38.055854  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:38.055732  304091 retry.go:31] will retry after 3.877296828s: waiting for machine to come up
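The `retry.go:31] will retry after ...` lines above show a bounded retry loop with a growing, jittered delay while the restarted VM waits for a DHCP lease. A minimal standalone sketch of that pattern, assuming a placeholder `getIP` helper and illustrative delay constants (not minikube's actual implementation):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// getIP stands in for querying libvirt's DHCP leases; here it is a stub
// that always fails, the way the log fails until the guest acquires a lease.
func getIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries getIP with a growing, jittered delay until it succeeds
// or the overall deadline passes, mirroring the retry cadence in the log.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		ip, err := getIP()
		if err == nil {
			return ip, nil
		}
		// Add up to 50% jitter and grow the base delay, capped at a few seconds.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("retry %d: will retry after %s: waiting for machine to come up\n", attempt, sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay = delay * 3 / 2
		}
	}
	return "", fmt.Errorf("machine did not get an IP within %s", timeout)
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}
```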
	I0920 19:04:43.362725  303063 start.go:364] duration metric: took 4m20.211671699s to acquireMachinesLock for "default-k8s-diff-port-612312"
	I0920 19:04:43.362794  303063 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:04:43.362810  303063 fix.go:54] fixHost starting: 
	I0920 19:04:43.363257  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:04:43.363315  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:04:43.380877  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34751
	I0920 19:04:43.381399  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:04:43.381894  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:04:43.381933  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:04:43.382364  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:04:43.382596  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:04:43.382746  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:04:43.384351  303063 fix.go:112] recreateIfNeeded on default-k8s-diff-port-612312: state=Stopped err=<nil>
	I0920 19:04:43.384379  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	W0920 19:04:43.384540  303063 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:04:43.386969  303063 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-612312" ...
	I0920 19:04:41.936215  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.936789  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has current primary IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.936811  302869 main.go:141] libmachine: (embed-certs-339897) Found IP for machine: 192.168.72.72
	I0920 19:04:41.936823  302869 main.go:141] libmachine: (embed-certs-339897) Reserving static IP address...
	I0920 19:04:41.937386  302869 main.go:141] libmachine: (embed-certs-339897) Reserved static IP address: 192.168.72.72
	I0920 19:04:41.937412  302869 main.go:141] libmachine: (embed-certs-339897) Waiting for SSH to be available...
	I0920 19:04:41.937435  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "embed-certs-339897", mac: "52:54:00:dc:b1:41", ip: "192.168.72.72"} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:41.937466  302869 main.go:141] libmachine: (embed-certs-339897) DBG | skip adding static IP to network mk-embed-certs-339897 - found existing host DHCP lease matching {name: "embed-certs-339897", mac: "52:54:00:dc:b1:41", ip: "192.168.72.72"}
	I0920 19:04:41.937481  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Getting to WaitForSSH function...
	I0920 19:04:41.939673  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.940065  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:41.940089  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.940196  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Using SSH client type: external
	I0920 19:04:41.940223  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa (-rw-------)
	I0920 19:04:41.940261  302869 main.go:141] libmachine: (embed-certs-339897) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:04:41.940274  302869 main.go:141] libmachine: (embed-certs-339897) DBG | About to run SSH command:
	I0920 19:04:41.940285  302869 main.go:141] libmachine: (embed-certs-339897) DBG | exit 0
	I0920 19:04:42.065967  302869 main.go:141] libmachine: (embed-certs-339897) DBG | SSH cmd err, output: <nil>: 
	I0920 19:04:42.066357  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetConfigRaw
	I0920 19:04:42.067004  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:42.069586  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.069937  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.069968  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.070208  302869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/config.json ...
	I0920 19:04:42.070452  302869 machine.go:93] provisionDockerMachine start ...
	I0920 19:04:42.070478  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:42.070687  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.072878  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.073340  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.073375  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.073501  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.073701  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.073899  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.074080  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.074254  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.074504  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.074523  302869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:04:42.182250  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:04:42.182287  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetMachineName
	I0920 19:04:42.182543  302869 buildroot.go:166] provisioning hostname "embed-certs-339897"
	I0920 19:04:42.182570  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetMachineName
	I0920 19:04:42.182818  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.185497  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.185850  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.185886  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.186069  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.186274  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.186421  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.186568  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.186770  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.186986  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.187006  302869 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-339897 && echo "embed-certs-339897" | sudo tee /etc/hostname
	I0920 19:04:42.307656  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-339897
	
	I0920 19:04:42.307700  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.310572  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.310943  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.310970  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.311184  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.311382  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.311534  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.311663  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.311810  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.311984  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.312003  302869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-339897' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-339897/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-339897' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:04:42.426403  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:04:42.426440  302869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:04:42.426493  302869 buildroot.go:174] setting up certificates
	I0920 19:04:42.426502  302869 provision.go:84] configureAuth start
	I0920 19:04:42.426513  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetMachineName
	I0920 19:04:42.426822  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:42.429708  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.430134  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.430170  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.430328  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.432799  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.433222  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.433251  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.433383  302869 provision.go:143] copyHostCerts
	I0920 19:04:42.433466  302869 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:04:42.433487  302869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:04:42.433549  302869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:04:42.433644  302869 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:04:42.433652  302869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:04:42.433678  302869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:04:42.433735  302869 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:04:42.433742  302869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:04:42.433762  302869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:04:42.433811  302869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.embed-certs-339897 san=[127.0.0.1 192.168.72.72 embed-certs-339897 localhost minikube]
	I0920 19:04:42.745528  302869 provision.go:177] copyRemoteCerts
	I0920 19:04:42.745599  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:04:42.745633  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.748247  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.748587  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.748619  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.748811  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.749014  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.749201  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.749334  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:42.831927  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:04:42.855674  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:04:42.879114  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 19:04:42.902982  302869 provision.go:87] duration metric: took 476.462339ms to configureAuth
	I0920 19:04:42.903019  302869 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:04:42.903236  302869 config.go:182] Loaded profile config "embed-certs-339897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:04:42.903321  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.906208  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.906580  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.906613  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.906810  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.907006  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.907136  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.907263  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.907427  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.907601  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.907616  302869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:04:43.127800  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:04:43.127847  302869 machine.go:96] duration metric: took 1.057372659s to provisionDockerMachine
	I0920 19:04:43.127864  302869 start.go:293] postStartSetup for "embed-certs-339897" (driver="kvm2")
	I0920 19:04:43.127890  302869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:04:43.127917  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.128263  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:04:43.128298  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.131648  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.132138  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.132173  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.132340  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.132560  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.132739  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.132896  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:43.216646  302869 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:04:43.220513  302869 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:04:43.220548  302869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:04:43.220629  302869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:04:43.220734  302869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:04:43.220862  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:04:43.230506  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:04:43.252894  302869 start.go:296] duration metric: took 125.003067ms for postStartSetup
	I0920 19:04:43.252943  302869 fix.go:56] duration metric: took 19.275066559s for fixHost
	I0920 19:04:43.252971  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.255999  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.256378  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.256406  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.256634  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.256858  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.257047  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.257214  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.257382  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:43.257546  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:43.257556  302869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:04:43.362516  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859083.339291891
	
	I0920 19:04:43.362545  302869 fix.go:216] guest clock: 1726859083.339291891
	I0920 19:04:43.362553  302869 fix.go:229] Guest: 2024-09-20 19:04:43.339291891 +0000 UTC Remote: 2024-09-20 19:04:43.25294824 +0000 UTC m=+278.942139838 (delta=86.343651ms)
	I0920 19:04:43.362585  302869 fix.go:200] guest clock delta is within tolerance: 86.343651ms
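The guest-clock lines above compare the VM's `date +%s.%N` output against the host time and accept the result when the delta is inside a tolerance. A small sketch of that check, assuming an illustrative one-second threshold (the real tolerance is not shown in this log):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the "seconds.nanoseconds" string returned by
// `date +%s.%N` on the guest into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = time.Second // assumed threshold, for illustration only

	guest, err := parseGuestClock("1726859083.339291891") // value from the log above
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
	} else {
		fmt.Printf("guest clock skewed by %s, would resync\n", delta)
	}
}
```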
	I0920 19:04:43.362591  302869 start.go:83] releasing machines lock for "embed-certs-339897", held for 19.38474105s
	I0920 19:04:43.362620  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.362970  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:43.365988  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.366359  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.366380  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.366610  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.367130  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.367326  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.367423  302869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:04:43.367469  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.367602  302869 ssh_runner.go:195] Run: cat /version.json
	I0920 19:04:43.367628  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.370233  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.370594  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.370624  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.370649  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.370804  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.370998  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.371169  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.371191  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.371249  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.371406  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.371470  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:43.371566  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.371721  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.371885  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:43.490023  302869 ssh_runner.go:195] Run: systemctl --version
	I0920 19:04:43.496615  302869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:04:43.643493  302869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:04:43.649492  302869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:04:43.649560  302869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:04:43.665423  302869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:04:43.665460  302869 start.go:495] detecting cgroup driver to use...
	I0920 19:04:43.665530  302869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:04:43.681288  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:04:43.695161  302869 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:04:43.695218  302869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:04:43.708772  302869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:04:43.722803  302869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:04:43.834054  302869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:04:43.966014  302869 docker.go:233] disabling docker service ...
	I0920 19:04:43.966102  302869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:04:43.982324  302869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:04:43.995351  302869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:04:44.135635  302869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:04:44.262661  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:04:44.277377  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:04:44.299889  302869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:04:44.299965  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.312434  302869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:04:44.312534  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.323052  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.333504  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.343704  302869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:04:44.354386  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.364308  302869 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.383581  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.395013  302869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:04:44.405227  302869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:04:44.405279  302869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:04:44.418685  302869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
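The three commands above form a small fallback chain: probe the `net.bridge.bridge-nf-call-iptables` sysctl, load `br_netfilter` when the sysctl is missing, then enable IPv4 forwarding. A sketch of that flow with Go's `os/exec`; the command strings are taken from the log, the error handling is simplified, and the program needs root (and Linux) to do anything useful:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and wraps any failure with its combined output,
// similar to how the ssh_runner lines above report stdout/stderr.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w: %s", name, args, err, out)
	}
	return nil
}

func main() {
	// On a freshly booted VM the bridge sysctl may not exist until the
	// br_netfilter module is loaded, so a failure here is expected and
	// triggers the modprobe fallback.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Println("modprobe failed:", err)
		}
	}
	// Enable IPv4 forwarding so pod traffic can be routed off the node.
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}
```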
	I0920 19:04:44.431323  302869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:04:44.558582  302869 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:04:44.644003  302869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:04:44.644091  302869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:04:44.649434  302869 start.go:563] Will wait 60s for crictl version
	I0920 19:04:44.649498  302869 ssh_runner.go:195] Run: which crictl
	I0920 19:04:44.653334  302869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:04:44.695896  302869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:04:44.696004  302869 ssh_runner.go:195] Run: crio --version
	I0920 19:04:44.726148  302869 ssh_runner.go:195] Run: crio --version
	I0920 19:04:44.757340  302869 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
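The block above is minikube's container-runtime preparation on the embed-certs-339897 guest: docker is masked, crictl is pointed at the cri-o socket, and /etc/crio/crio.conf.d/02-crio.conf is edited in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) before crio is restarted. A condensed sketch of the same edits, assuming the same drop-in path as this run (illustrative, not lifted from the minikube source):

    # Point crictl at the cri-o socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # Rewrite the cri-o drop-in: pause image, cgroup driver, conmon cgroup.
    conf=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
    sudo sed -i '/conmon_cgroup = .*/d' "$conf"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

    # Let pods bind privileged ports, then pick up the new configuration.
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"
    sudo systemctl daemon-reload && sudo systemctl restart crio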
	I0920 19:04:43.388378  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Start
	I0920 19:04:43.388603  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Ensuring networks are active...
	I0920 19:04:43.389387  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Ensuring network default is active
	I0920 19:04:43.389863  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Ensuring network mk-default-k8s-diff-port-612312 is active
	I0920 19:04:43.390364  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Getting domain xml...
	I0920 19:04:43.391121  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Creating domain...
	I0920 19:04:44.718004  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting to get IP...
	I0920 19:04:44.718885  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.719317  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.719413  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:44.719288  304227 retry.go:31] will retry after 197.63251ms: waiting for machine to come up
	I0920 19:04:44.919026  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.919516  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.919547  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:44.919475  304227 retry.go:31] will retry after 305.409091ms: waiting for machine to come up
	I0920 19:04:45.227550  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.228191  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.228224  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:45.228147  304227 retry.go:31] will retry after 311.72219ms: waiting for machine to come up
	I0920 19:04:45.541945  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.542374  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.542403  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:45.542344  304227 retry.go:31] will retry after 547.369471ms: waiting for machine to come up
	I0920 19:04:46.091199  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.091731  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.091765  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:46.091693  304227 retry.go:31] will retry after 519.190971ms: waiting for machine to come up
	I0920 19:04:46.612175  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.612641  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.612672  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:46.612591  304227 retry.go:31] will retry after 715.908704ms: waiting for machine to come up
	I0920 19:04:47.330911  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:47.331350  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:47.331379  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:47.331294  304227 retry.go:31] will retry after 898.358136ms: waiting for machine to come up
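Interleaved with the embed-certs work, the second worker (pid 303063) is restarting the default-k8s-diff-port-612312 domain and polling libvirt with increasing back-off until a DHCP lease appears. The same wait can be reproduced by hand with virsh against the qemu:///system connection used above; the loop below is a hypothetical helper, not minikube code:

    # Poll until the restarted domain reports an IPv4 lease.
    dom=default-k8s-diff-port-612312
    until virsh -c qemu:///system domifaddr "$dom" | grep -q ipv4; do
        echo "waiting for $dom to obtain an IP..."
        sleep 2
    done
    virsh -c qemu:///system domifaddr "$dom"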
	I0920 19:04:44.759090  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:44.762331  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:44.762696  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:44.762728  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:44.762954  302869 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 19:04:44.767209  302869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:04:44.781327  302869 kubeadm.go:883] updating cluster {Name:embed-certs-339897 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-339897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:04:44.781465  302869 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:04:44.781512  302869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:04:44.817356  302869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 19:04:44.817422  302869 ssh_runner.go:195] Run: which lz4
	I0920 19:04:44.821534  302869 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:04:44.826169  302869 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:04:44.826205  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 19:04:46.160290  302869 crio.go:462] duration metric: took 1.338795677s to copy over tarball
	I0920 19:04:46.160379  302869 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 19:04:48.265535  302869 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.105118482s)
	I0920 19:04:48.265580  302869 crio.go:469] duration metric: took 2.105250135s to extract the tarball
	I0920 19:04:48.265588  302869 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 19:04:48.302529  302869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:04:48.346391  302869 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:04:48.346419  302869 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:04:48.346427  302869 kubeadm.go:934] updating node { 192.168.72.72 8443 v1.31.1 crio true true} ...
	I0920 19:04:48.346566  302869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-339897 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-339897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:04:48.346668  302869 ssh_runner.go:195] Run: crio config
	I0920 19:04:48.396798  302869 cni.go:84] Creating CNI manager for ""
	I0920 19:04:48.396824  302869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:04:48.396834  302869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:04:48.396866  302869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.72 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-339897 NodeName:embed-certs-339897 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:04:48.397043  302869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-339897"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:04:48.397121  302869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:04:48.407031  302869 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:04:48.407118  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:04:48.416554  302869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0920 19:04:48.432540  302869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:04:48.448042  302869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0920 19:04:48.465193  302869 ssh_runner.go:195] Run: grep 192.168.72.72	control-plane.minikube.internal$ /etc/hosts
	I0920 19:04:48.469083  302869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:04:48.481123  302869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:04:48.609883  302869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:04:48.627512  302869 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897 for IP: 192.168.72.72
	I0920 19:04:48.627545  302869 certs.go:194] generating shared ca certs ...
	I0920 19:04:48.627571  302869 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:04:48.627784  302869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:04:48.627851  302869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:04:48.627866  302869 certs.go:256] generating profile certs ...
	I0920 19:04:48.628032  302869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/client.key
	I0920 19:04:48.628143  302869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/apiserver.key.308547ed
	I0920 19:04:48.628206  302869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/proxy-client.key
	I0920 19:04:48.628375  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:04:48.628421  302869 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:04:48.628435  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:04:48.628470  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:04:48.628509  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:04:48.628542  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:04:48.628616  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:04:48.629569  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:04:48.656203  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:04:48.708322  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:04:48.737686  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:04:48.772198  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 19:04:48.812086  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 19:04:48.836038  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:04:48.859972  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 19:04:48.883881  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:04:48.908399  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:04:48.930787  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:04:48.954052  302869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:04:48.970257  302869 ssh_runner.go:195] Run: openssl version
	I0920 19:04:48.976072  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:04:48.986449  302869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:04:48.990765  302869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:04:48.990833  302869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:04:48.996437  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:04:49.007111  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:04:49.017548  302869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:04:49.022044  302869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:04:49.022108  302869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:04:49.027752  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:04:49.038538  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:04:49.049445  302869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:04:49.054018  302869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:04:49.054100  302869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:04:49.059842  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:04:49.070748  302869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:04:49.075195  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:04:49.081100  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:04:49.086844  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:04:49.092790  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:04:49.098664  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:04:49.104562  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
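Each openssl x509 ... -checkend 86400 call above asks whether the certificate will still be valid 24 hours from now (exit status 0 means it will not expire inside that window), which is how minikube decides the existing control-plane certificates can be reused on restart. A small loop doing the same check over the cert directories touched in this run, purely as an illustration:

    # Flag any control-plane certificate that expires within the next 24 hours.
    for crt in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
        if ! sudo openssl x509 -noout -in "$crt" -checkend 86400 >/dev/null; then
            echo "renewal needed: $crt"
        fi
    done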
	I0920 19:04:49.110818  302869 kubeadm.go:392] StartCluster: {Name:embed-certs-339897 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-339897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:04:49.110952  302869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:04:49.111003  302869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:04:49.157700  302869 cri.go:89] found id: ""
	I0920 19:04:49.157774  302869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:04:49.168314  302869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:04:49.168339  302869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:04:49.168385  302869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:04:49.178632  302869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:04:49.179681  302869 kubeconfig.go:125] found "embed-certs-339897" server: "https://192.168.72.72:8443"
	I0920 19:04:49.181624  302869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:04:49.192084  302869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.72
	I0920 19:04:49.192159  302869 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:04:49.192188  302869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:04:49.192265  302869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:04:49.229141  302869 cri.go:89] found id: ""
	I0920 19:04:49.229232  302869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:04:49.247628  302869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:04:49.258190  302869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:04:49.258211  302869 kubeadm.go:157] found existing configuration files:
	
	I0920 19:04:49.258270  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:04:49.267769  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:04:49.267837  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:04:49.277473  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:04:49.286639  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:04:49.286712  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:04:49.296295  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:04:49.305705  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:04:49.305787  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:04:49.315191  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:04:49.324206  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:04:49.324288  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:04:49.334065  302869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:04:49.344823  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
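With the stale kubeconfig fragments removed and the fresh kubeadm.yaml copied into place, the restart path does not rerun a full kubeadm init; it replays the individual init phases one at a time (certs, kubeconfig, kubelet-start, control-plane, etcd), as the following log lines show. Condensed, the sequence for this run is:

    cfg=/var/tmp/minikube/kubeadm.yaml
    bin=/var/lib/minikube/binaries/v1.31.1
    sudo env PATH="$bin:$PATH" kubeadm init phase certs all         --config "$cfg"
    sudo env PATH="$bin:$PATH" kubeadm init phase kubeconfig all    --config "$cfg"
    sudo env PATH="$bin:$PATH" kubeadm init phase kubelet-start     --config "$cfg"
    sudo env PATH="$bin:$PATH" kubeadm init phase control-plane all --config "$cfg"
    sudo env PATH="$bin:$PATH" kubeadm init phase etcd local        --config "$cfg"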
	I0920 19:04:48.231405  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:48.231846  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:48.231872  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:48.231795  304227 retry.go:31] will retry after 1.105264539s: waiting for machine to come up
	I0920 19:04:49.338940  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:49.339413  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:49.339437  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:49.339366  304227 retry.go:31] will retry after 1.638536651s: waiting for machine to come up
	I0920 19:04:50.980320  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:50.980774  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:50.980805  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:50.980714  304227 retry.go:31] will retry after 2.064766522s: waiting for machine to come up
	I0920 19:04:49.450454  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.412643  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.629144  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.694547  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.756897  302869 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:04:50.757008  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:51.258120  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:51.758025  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:52.258040  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:52.757302  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:52.774867  302869 api_server.go:72] duration metric: took 2.017964832s to wait for apiserver process to appear ...
	I0920 19:04:52.774906  302869 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:04:52.774954  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:55.383214  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:04:55.383255  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:04:55.383272  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:55.406625  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:04:55.406660  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:04:55.775825  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:55.785126  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:04:55.785157  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:04:56.275864  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:56.284002  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:04:56.284032  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:04:56.775547  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:56.779999  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 200:
	ok
	I0920 19:04:56.786034  302869 api_server.go:141] control plane version: v1.31.1
	I0920 19:04:56.786066  302869 api_server.go:131] duration metric: took 4.011153019s to wait for apiserver health ...
	I0920 19:04:56.786076  302869 cni.go:84] Creating CNI manager for ""
	I0920 19:04:56.786082  302869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:04:56.788195  302869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
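The 403 and then 500 responses above are the normal progression while the restarted apiserver finishes its post-start hooks; the RBAC bootstrap roles and the default priority classes are the last items to report ready, after which /healthz flips to 200 "ok" and minikube moves on to the CNI step. The poller retries roughly every 500ms, which matches the timestamps above. The same probe from a shell, assuming unauthenticated access to /healthz is permitted once the RBAC bootstrap completes (as the responses in this log suggest):

    # Poll the apiserver health endpoint until it reports ok (-k: the serving cert is self-signed).
    until [ "$(curl -ks https://192.168.72.72:8443/healthz)" = "ok" ]; do
        sleep 0.5
    done
    echo "apiserver healthy"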
	I0920 19:04:53.047487  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:53.048005  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:53.048027  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:53.047958  304227 retry.go:31] will retry after 2.829648578s: waiting for machine to come up
	I0920 19:04:55.879069  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:55.879538  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:55.879562  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:55.879488  304227 retry.go:31] will retry after 3.029828813s: waiting for machine to come up
	I0920 19:04:56.789703  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:04:56.799605  302869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:04:56.816974  302869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:04:56.828470  302869 system_pods.go:59] 8 kube-system pods found
	I0920 19:04:56.828582  302869 system_pods.go:61] "coredns-7c65d6cfc9-xnfsk" [5e34a8b9-d748-484a-92ab-0d288ab5f35e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:04:56.828610  302869 system_pods.go:61] "etcd-embed-certs-339897" [1d0e8303-0ab9-418c-ba2d-f0ba33abad36] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 19:04:56.828637  302869 system_pods.go:61] "kube-apiserver-embed-certs-339897" [35569778-54b1-456d-8822-5a53a5e336fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 19:04:56.828655  302869 system_pods.go:61] "kube-controller-manager-embed-certs-339897" [6b9db655-59a1-4975-b3c7-fcc29a912850] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:04:56.828677  302869 system_pods.go:61] "kube-proxy-xs4nd" [a32f4c96-ae6e-4402-89c5-0226a4412d17] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 19:04:56.828694  302869 system_pods.go:61] "kube-scheduler-embed-certs-339897" [81dd07df-2ba9-4f8e-bb16-263bd6496a0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 19:04:56.828716  302869 system_pods.go:61] "metrics-server-6867b74b74-qqhcw" [b720a331-05ef-4528-bd25-0c1e7ef66b16] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:04:56.828729  302869 system_pods.go:61] "storage-provisioner" [08674813-f61d-49e9-a714-5f38b95f058e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 19:04:56.828738  302869 system_pods.go:74] duration metric: took 11.732519ms to wait for pod list to return data ...
	I0920 19:04:56.828748  302869 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:04:56.835747  302869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:04:56.835786  302869 node_conditions.go:123] node cpu capacity is 2
	I0920 19:04:56.835799  302869 node_conditions.go:105] duration metric: took 7.044914ms to run NodePressure ...
	I0920 19:04:56.835822  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:57.221422  302869 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 19:04:57.225575  302869 kubeadm.go:739] kubelet initialised
	I0920 19:04:57.225601  302869 kubeadm.go:740] duration metric: took 4.150722ms waiting for restarted kubelet to initialise ...
	I0920 19:04:57.225610  302869 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:04:57.230469  302869 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace to be "Ready" ...
	I0920 19:04:59.237961  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace has status "Ready":"False"
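After the addon phase, the test waits up to four minutes for every system-critical pod to turn Ready; coredns is still restarting here, so the poll keeps reporting "Ready":"False" until it settles. An equivalent one-off check with kubectl, using the kubeconfig context created for this profile (illustrative, not part of the test harness):

    kubectl --context embed-certs-339897 -n kube-system wait pod \
        -l k8s-app=kube-dns --for=condition=Ready --timeout=4m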
	I0920 19:04:58.911412  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:58.911990  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:58.912020  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:58.911956  304227 retry.go:31] will retry after 3.428044067s: waiting for machine to come up
	I0920 19:05:02.343216  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.343633  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Found IP for machine: 192.168.50.230
	I0920 19:05:02.343668  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has current primary IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.343679  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Reserving static IP address...
	I0920 19:05:02.344038  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Reserved static IP address: 192.168.50.230
	I0920 19:05:02.344084  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-612312", mac: "52:54:00:fa:2b:63", ip: "192.168.50.230"} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.344097  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for SSH to be available...
	I0920 19:05:02.344123  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | skip adding static IP to network mk-default-k8s-diff-port-612312 - found existing host DHCP lease matching {name: "default-k8s-diff-port-612312", mac: "52:54:00:fa:2b:63", ip: "192.168.50.230"}
	I0920 19:05:02.344136  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Getting to WaitForSSH function...
	I0920 19:05:02.346591  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.346932  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.346957  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.347110  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Using SSH client type: external
	I0920 19:05:02.347157  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa (-rw-------)
	I0920 19:05:02.347194  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:05:02.347214  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | About to run SSH command:
	I0920 19:05:02.347227  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | exit 0
	I0920 19:05:02.474040  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | SSH cmd err, output: <nil>: 
	I0920 19:05:02.474475  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetConfigRaw
	I0920 19:05:02.475160  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:02.477963  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.478338  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.478361  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.478680  303063 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/config.json ...
	I0920 19:05:02.478923  303063 machine.go:93] provisionDockerMachine start ...
	I0920 19:05:02.478949  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:02.479166  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.481380  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.481759  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.481797  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.481961  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.482149  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.482307  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.482458  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.482619  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:02.482883  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:02.482900  303063 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:05:02.586360  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:05:02.586395  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetMachineName
	I0920 19:05:02.586694  303063 buildroot.go:166] provisioning hostname "default-k8s-diff-port-612312"
	I0920 19:05:02.586720  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetMachineName
	I0920 19:05:02.586951  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.589692  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.590053  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.590080  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.590230  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.590420  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.590563  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.590722  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.590936  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:02.591112  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:02.591126  303063 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-612312 && echo "default-k8s-diff-port-612312" | sudo tee /etc/hostname
	I0920 19:05:02.707768  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-612312
	
	I0920 19:05:02.707799  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.710647  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.711035  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.711064  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.711234  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.711448  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.711622  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.711791  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.711938  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:02.712098  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:02.712116  303063 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-612312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-612312/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-612312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:05:02.828234  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:05:02.828274  303063 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:05:02.828314  303063 buildroot.go:174] setting up certificates
	I0920 19:05:02.828327  303063 provision.go:84] configureAuth start
	I0920 19:05:02.828340  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetMachineName
	I0920 19:05:02.828700  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:02.831997  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.832469  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.832503  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.832704  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.835280  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.835577  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.835608  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.835699  303063 provision.go:143] copyHostCerts
	I0920 19:05:02.835766  303063 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:05:02.835787  303063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:05:02.835848  303063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:05:02.835947  303063 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:05:02.835955  303063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:05:02.835975  303063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:05:02.836026  303063 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:05:02.836033  303063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:05:02.836055  303063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:05:02.836103  303063 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-612312 san=[127.0.0.1 192.168.50.230 default-k8s-diff-port-612312 localhost minikube]
	I0920 19:05:02.983437  303063 provision.go:177] copyRemoteCerts
	I0920 19:05:02.983510  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:05:02.983541  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.986435  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.986791  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.986835  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.987110  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.987289  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.987438  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.987579  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.674961  303486 start.go:364] duration metric: took 3m34.601349843s to acquireMachinesLock for "old-k8s-version-425599"
	I0920 19:05:03.675039  303486 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:05:03.675048  303486 fix.go:54] fixHost starting: 
	I0920 19:05:03.675480  303486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:03.675541  303486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:03.694201  303486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34167
	I0920 19:05:03.694642  303486 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:03.695198  303486 main.go:141] libmachine: Using API Version  1
	I0920 19:05:03.695221  303486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:03.695609  303486 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:03.695793  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:03.695935  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetState
	I0920 19:05:03.697838  303486 fix.go:112] recreateIfNeeded on old-k8s-version-425599: state=Stopped err=<nil>
	I0920 19:05:03.697885  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	W0920 19:05:03.698080  303486 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:05:03.700333  303486 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-425599" ...
	I0920 19:05:03.701947  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .Start
	I0920 19:05:03.702184  303486 main.go:141] libmachine: (old-k8s-version-425599) Ensuring networks are active...
	I0920 19:05:03.703106  303486 main.go:141] libmachine: (old-k8s-version-425599) Ensuring network default is active
	I0920 19:05:03.703645  303486 main.go:141] libmachine: (old-k8s-version-425599) Ensuring network mk-old-k8s-version-425599 is active
	I0920 19:05:03.704152  303486 main.go:141] libmachine: (old-k8s-version-425599) Getting domain xml...
	I0920 19:05:03.704942  303486 main.go:141] libmachine: (old-k8s-version-425599) Creating domain...
	I0920 19:05:01.738488  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:03.238934  302869 pod_ready.go:93] pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:03.238968  302869 pod_ready.go:82] duration metric: took 6.008471722s for pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:03.238978  302869 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:03.746041  302869 pod_ready.go:93] pod "etcd-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:03.746069  302869 pod_ready.go:82] duration metric: took 507.084418ms for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:03.746078  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
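	(Editor's note: the pod_ready.go lines above poll pods in kube-system until their Ready condition turns True. The following is a minimal, illustrative client-go sketch of an equivalent wait loop, not minikube's own helper; the kubeconfig path, namespace, pod name, and timeouts are assumptions for the example.)

```go
// Illustrative sketch: poll a pod until its Ready condition is True,
// the same check the pod_ready.go log lines above report on.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for %s/%s: %w", ns, name, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	// Kubeconfig path is illustrative; any kubeconfig for the test cluster works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitForPodReady(ctx, cs, "kube-system", "etcd-embed-certs-339897"); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
```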
	I0920 19:05:03.072306  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0920 19:05:03.096078  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:05:03.122027  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:05:03.150314  303063 provision.go:87] duration metric: took 321.970593ms to configureAuth
	I0920 19:05:03.150345  303063 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:05:03.150557  303063 config.go:182] Loaded profile config "default-k8s-diff-port-612312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:05:03.150650  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.153988  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.154472  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.154524  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.154631  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.154840  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.155194  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.155397  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.155741  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:03.155990  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:03.156011  303063 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:05:03.417981  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:05:03.418020  303063 machine.go:96] duration metric: took 939.078754ms to provisionDockerMachine
	I0920 19:05:03.418038  303063 start.go:293] postStartSetup for "default-k8s-diff-port-612312" (driver="kvm2")
	I0920 19:05:03.418052  303063 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:05:03.418083  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.418456  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:05:03.418496  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.421689  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.422245  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.422282  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.422539  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.422747  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.422991  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.423144  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.509122  303063 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:05:03.515233  303063 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:05:03.515263  303063 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:05:03.515343  303063 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:05:03.515441  303063 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:05:03.515561  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:05:03.529346  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:03.559267  303063 start.go:296] duration metric: took 141.209592ms for postStartSetup
	I0920 19:05:03.559320  303063 fix.go:56] duration metric: took 20.196510123s for fixHost
	I0920 19:05:03.559348  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.563599  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.564320  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.564354  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.564605  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.564917  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.565120  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.565379  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.565588  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:03.565813  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:03.565827  303063 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:05:03.674803  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859103.651785276
	
	I0920 19:05:03.674833  303063 fix.go:216] guest clock: 1726859103.651785276
	I0920 19:05:03.674840  303063 fix.go:229] Guest: 2024-09-20 19:05:03.651785276 +0000 UTC Remote: 2024-09-20 19:05:03.559326363 +0000 UTC m=+280.560675514 (delta=92.458913ms)
	I0920 19:05:03.674862  303063 fix.go:200] guest clock delta is within tolerance: 92.458913ms
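	(Editor's note: the fix.go lines above read the guest clock over SSH with `date +%s.%N` and compare it against the host clock. Below is a small Go sketch of that comparison; the one-second tolerance is illustrative and not necessarily minikube's actual threshold.)

```go
// Sketch of the guest-clock check: parse `date +%s.%N` output from the guest
// and compare it with the host clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad/truncate the fractional part to exactly 9 digits of nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726859103.651785276\n")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < time.Second)
}
```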
	I0920 19:05:03.674867  303063 start.go:83] releasing machines lock for "default-k8s-diff-port-612312", held for 20.312097182s
	I0920 19:05:03.674897  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.675183  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:03.677975  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.678374  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.678406  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.678552  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.679080  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.679255  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.679380  303063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:05:03.679429  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.679442  303063 ssh_runner.go:195] Run: cat /version.json
	I0920 19:05:03.679472  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.682443  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.682733  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.682876  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.682902  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.683014  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.683081  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.683104  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.683222  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.683326  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.683440  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.683512  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.683634  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.683721  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.683753  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.766786  303063 ssh_runner.go:195] Run: systemctl --version
	I0920 19:05:03.806684  303063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:05:03.950032  303063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:05:03.957153  303063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:05:03.957230  303063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:05:03.976784  303063 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:05:03.976814  303063 start.go:495] detecting cgroup driver to use...
	I0920 19:05:03.976902  303063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:05:03.994391  303063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:05:04.009961  303063 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:05:04.010021  303063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:05:04.023827  303063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:05:04.038585  303063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:05:04.157489  303063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:05:04.320396  303063 docker.go:233] disabling docker service ...
	I0920 19:05:04.320477  303063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:05:04.334865  303063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:05:04.350776  303063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:05:04.469438  303063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:05:04.596055  303063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:05:04.610548  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:05:04.629128  303063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:05:04.629192  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.640211  303063 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:05:04.640289  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.650877  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.661863  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.672695  303063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:05:04.684141  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.696358  303063 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.714936  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.726155  303063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:05:04.737400  303063 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:05:04.737460  303063 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:05:04.752752  303063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
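	(Editor's note: the sequence above probes the bridge netfilter sysctl, falls back to loading br_netfilter when the key is missing, then enables IPv4 forwarding. The sketch below reproduces that fallback locally with os/exec; it assumes root and is purely illustrative, not minikube's remote ssh_runner implementation.)

```go
// Minimal sketch of the netfilter fallback: probe the bridge sysctl,
// load br_netfilter if the key is absent, then enable IP forwarding.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// The sysctl key only exists once the br_netfilter module is loaded.
		fmt.Println("bridge sysctl missing, loading br_netfilter:", err)
		if err := run("modprobe", "br_netfilter"); err != nil {
			panic(err)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		panic(err)
	}
}
```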
	I0920 19:05:04.767664  303063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:04.892509  303063 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:05:04.992361  303063 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:05:04.992465  303063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:05:04.997119  303063 start.go:563] Will wait 60s for crictl version
	I0920 19:05:04.997215  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:05:05.001132  303063 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:05:05.050835  303063 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:05:05.050955  303063 ssh_runner.go:195] Run: crio --version
	I0920 19:05:05.079870  303063 ssh_runner.go:195] Run: crio --version
	I0920 19:05:05.112325  303063 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 19:05:05.113600  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:05.116591  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:05.117037  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:05.117075  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:05.117334  303063 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 19:05:05.122086  303063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:05.135489  303063 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-612312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-612312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.230 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0920 19:05:05.135682  303063 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:05:05.135776  303063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:05.174026  303063 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 19:05:05.174090  303063 ssh_runner.go:195] Run: which lz4
	I0920 19:05:05.179003  303063 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:05:05.184119  303063 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:05:05.184168  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 19:05:06.479331  303063 crio.go:462] duration metric: took 1.300388015s to copy over tarball
	I0920 19:05:06.479434  303063 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 19:05:05.040094  303486 main.go:141] libmachine: (old-k8s-version-425599) Waiting to get IP...
	I0920 19:05:05.041198  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:05.041615  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:05.041711  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:05.041616  304380 retry.go:31] will retry after 264.073086ms: waiting for machine to come up
	I0920 19:05:05.307229  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:05.307761  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:05.307784  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:05.307713  304380 retry.go:31] will retry after 317.541552ms: waiting for machine to come up
	I0920 19:05:05.627262  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:05.627903  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:05.627929  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:05.627797  304380 retry.go:31] will retry after 432.236037ms: waiting for machine to come up
	I0920 19:05:06.062368  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:06.062842  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:06.062873  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:06.062804  304380 retry.go:31] will retry after 525.683807ms: waiting for machine to come up
	I0920 19:05:06.590915  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:06.591405  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:06.591434  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:06.591355  304380 retry.go:31] will retry after 542.00244ms: waiting for machine to come up
	I0920 19:05:07.135388  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:07.135944  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:07.135998  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:07.135908  304380 retry.go:31] will retry after 886.798885ms: waiting for machine to come up
	I0920 19:05:08.024147  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:08.024684  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:08.024713  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:08.024596  304380 retry.go:31] will retry after 826.869965ms: waiting for machine to come up
	I0920 19:05:08.853176  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:08.853793  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:08.853828  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:08.853736  304380 retry.go:31] will retry after 1.007422775s: waiting for machine to come up
	I0920 19:05:05.756992  302869 pod_ready.go:103] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:08.255312  302869 pod_ready.go:103] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:08.656490  303063 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.1770136s)
	I0920 19:05:08.656529  303063 crio.go:469] duration metric: took 2.177156837s to extract the tarball
	I0920 19:05:08.656539  303063 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 19:05:08.693153  303063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:08.733444  303063 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:05:08.733473  303063 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:05:08.733484  303063 kubeadm.go:934] updating node { 192.168.50.230 8444 v1.31.1 crio true true} ...
	I0920 19:05:08.733624  303063 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-612312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-612312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:05:08.733710  303063 ssh_runner.go:195] Run: crio config
	I0920 19:05:08.777872  303063 cni.go:84] Creating CNI manager for ""
	I0920 19:05:08.777913  303063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:05:08.777927  303063 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:05:08.777957  303063 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.230 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-612312 NodeName:default-k8s-diff-port-612312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:05:08.778143  303063 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.230
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-612312"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
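	(Editor's note: the kubeadm.go lines above render the InitConfiguration/ClusterConfiguration dump from the profile's node name, IP, and API-server port. The sketch below is not minikube's actual template, just a small text/template illustration of producing the per-node InitConfiguration section shown above.)

```go
// Illustrative rendering of a per-profile kubeadm InitConfiguration.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	err := tmpl.Execute(os.Stdout, struct {
		NodeName      string
		NodeIP        string
		APIServerPort int
	}{"default-k8s-diff-port-612312", "192.168.50.230", 8444})
	if err != nil {
		panic(err)
	}
}
```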
	I0920 19:05:08.778220  303063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:05:08.788133  303063 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:05:08.788208  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:05:08.797461  303063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0920 19:05:08.814111  303063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:05:08.832188  303063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0920 19:05:08.849801  303063 ssh_runner.go:195] Run: grep 192.168.50.230	control-plane.minikube.internal$ /etc/hosts
	I0920 19:05:08.853809  303063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:08.865685  303063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:08.985881  303063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:05:09.002387  303063 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312 for IP: 192.168.50.230
	I0920 19:05:09.002417  303063 certs.go:194] generating shared ca certs ...
	I0920 19:05:09.002441  303063 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:09.002656  303063 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:05:09.002727  303063 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:05:09.002741  303063 certs.go:256] generating profile certs ...
	I0920 19:05:09.002859  303063 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/client.key
	I0920 19:05:09.002940  303063 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/apiserver.key.637d18af
	I0920 19:05:09.002990  303063 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/proxy-client.key
	I0920 19:05:09.003207  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:05:09.003248  303063 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:05:09.003256  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:05:09.003278  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:05:09.003306  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:05:09.003328  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:05:09.003365  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:09.004030  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:05:09.037203  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:05:09.068858  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:05:09.095082  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:05:09.122167  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 19:05:09.147953  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 19:05:09.174251  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:05:09.202438  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:05:09.231354  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:05:09.256365  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:05:09.282589  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:05:09.308610  303063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:05:09.328798  303063 ssh_runner.go:195] Run: openssl version
	I0920 19:05:09.334685  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:05:09.345947  303063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:09.350772  303063 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:09.350838  303063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:09.356595  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:05:09.367559  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:05:09.380638  303063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:05:09.385362  303063 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:05:09.385429  303063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:05:09.391299  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:05:09.402065  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:05:09.412841  303063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:05:09.417074  303063 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:05:09.417138  303063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:05:09.422761  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:05:09.433780  303063 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:05:09.438734  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:05:09.444888  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:05:09.450715  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:05:09.456993  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:05:09.462716  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:05:09.468847  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 19:05:09.474680  303063 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-612312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-612312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.230 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:05:09.474780  303063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:05:09.474844  303063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:09.513886  303063 cri.go:89] found id: ""
	I0920 19:05:09.514006  303063 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:05:09.524385  303063 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:05:09.524417  303063 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:05:09.524479  303063 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:05:09.534288  303063 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:05:09.535251  303063 kubeconfig.go:125] found "default-k8s-diff-port-612312" server: "https://192.168.50.230:8444"
	I0920 19:05:09.537293  303063 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:05:09.547753  303063 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.230
	I0920 19:05:09.547796  303063 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:05:09.547812  303063 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:05:09.547890  303063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:09.590656  303063 cri.go:89] found id: ""
	I0920 19:05:09.590743  303063 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:05:09.607426  303063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:05:09.617258  303063 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:05:09.617280  303063 kubeadm.go:157] found existing configuration files:
	
	I0920 19:05:09.617344  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 19:05:09.626725  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:05:09.626813  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:05:09.636421  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 19:05:09.645711  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:05:09.645780  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:05:09.655351  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 19:05:09.664771  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:05:09.664833  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:05:09.674556  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 19:05:09.683677  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:05:09.683821  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:05:09.695159  303063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:05:09.704995  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:09.821398  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:10.642045  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:10.870266  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:10.935191  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:11.015669  303063 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:05:11.015787  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:11.516670  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:12.016486  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:12.516070  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:13.016012  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:13.031718  303063 api_server.go:72] duration metric: took 2.016048489s to wait for apiserver process to appear ...
	I0920 19:05:13.031752  303063 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:05:13.031781  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:13.032414  303063 api_server.go:269] stopped: https://192.168.50.230:8444/healthz: Get "https://192.168.50.230:8444/healthz": dial tcp 192.168.50.230:8444: connect: connection refused
	I0920 19:05:09.863227  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:09.863693  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:09.863721  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:09.863640  304380 retry.go:31] will retry after 1.556199895s: waiting for machine to come up
	I0920 19:05:11.422510  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:11.423244  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:11.423271  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:11.423179  304380 retry.go:31] will retry after 1.670177778s: waiting for machine to come up
	I0920 19:05:13.095982  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:13.096600  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:13.096626  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:13.096545  304380 retry.go:31] will retry after 2.71780554s: waiting for machine to come up
	I0920 19:05:10.256325  302869 pod_ready.go:93] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.256352  302869 pod_ready.go:82] duration metric: took 6.510267221s for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.256361  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.263229  302869 pod_ready.go:93] pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.263254  302869 pod_ready.go:82] duration metric: took 6.886052ms for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.263264  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xs4nd" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.270014  302869 pod_ready.go:93] pod "kube-proxy-xs4nd" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.270040  302869 pod_ready.go:82] duration metric: took 6.769102ms for pod "kube-proxy-xs4nd" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.270049  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.277232  302869 pod_ready.go:93] pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.277262  302869 pod_ready.go:82] duration metric: took 7.203732ms for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.277275  302869 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:12.284083  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:14.284983  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:13.532830  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:15.579530  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:05:15.579567  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:05:15.579584  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:15.596526  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:05:15.596570  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:05:16.032011  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:16.039310  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:05:16.039346  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:05:16.531881  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:16.536703  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:05:16.536736  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:05:17.032322  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:17.036979  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 200:
	ok
	I0920 19:05:17.043667  303063 api_server.go:141] control plane version: v1.31.1
	I0920 19:05:17.043701  303063 api_server.go:131] duration metric: took 4.011936277s to wait for apiserver health ...
	I0920 19:05:17.043710  303063 cni.go:84] Creating CNI manager for ""
	I0920 19:05:17.043716  303063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:05:17.045376  303063 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:05:17.046579  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:05:17.056771  303063 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:05:17.076571  303063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:05:17.085546  303063 system_pods.go:59] 8 kube-system pods found
	I0920 19:05:17.085584  303063 system_pods.go:61] "coredns-7c65d6cfc9-427x2" [48b87f9f-4697-4d76-aed1-3d54720172c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:05:17.085591  303063 system_pods.go:61] "etcd-default-k8s-diff-port-612312" [8537d9ce-87da-425d-a90a-eb4f30f9d23f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 19:05:17.085597  303063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-612312" [d5705298-0b1f-4f95-bf5d-c795e09dd30e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 19:05:17.085608  303063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-612312" [99b04f73-91d6-42d4-8def-09b65ca142f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:05:17.085615  303063 system_pods.go:61] "kube-proxy-zp8l5" [9fe30e51-ef3f-4448-916a-8ad75832b207] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 19:05:17.085624  303063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-612312" [4b9dfd39-5b60-4a90-9edf-0af7e99f0ac6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 19:05:17.085631  303063 system_pods.go:61] "metrics-server-6867b74b74-2tnqc" [35ce9a11-e606-41da-84bf-b3c5e9a18245] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:05:17.085638  303063 system_pods.go:61] "storage-provisioner" [6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 19:05:17.085646  303063 system_pods.go:74] duration metric: took 9.051189ms to wait for pod list to return data ...
	I0920 19:05:17.085657  303063 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:05:17.089161  303063 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:05:17.089190  303063 node_conditions.go:123] node cpu capacity is 2
	I0920 19:05:17.089201  303063 node_conditions.go:105] duration metric: took 3.534622ms to run NodePressure ...
	I0920 19:05:17.089218  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:17.442957  303063 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 19:05:17.447222  303063 kubeadm.go:739] kubelet initialised
	I0920 19:05:17.447247  303063 kubeadm.go:740] duration metric: took 4.255349ms waiting for restarted kubelet to initialise ...
	I0920 19:05:17.447255  303063 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:05:17.451839  303063 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.457216  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.457240  303063 pod_ready.go:82] duration metric: took 5.361636ms for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.457250  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.457256  303063 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.462245  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.462273  303063 pod_ready.go:82] duration metric: took 5.009342ms for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.462313  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.462326  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.468060  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.468087  303063 pod_ready.go:82] duration metric: took 5.75409ms for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.468099  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.468105  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.479703  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.479727  303063 pod_ready.go:82] duration metric: took 11.614638ms for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.479739  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.479750  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.879555  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-proxy-zp8l5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.879582  303063 pod_ready.go:82] duration metric: took 399.824208ms for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.879592  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-proxy-zp8l5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.879599  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:18.281551  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.281585  303063 pod_ready.go:82] duration metric: took 401.976884ms for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:18.281601  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.281611  303063 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:18.680674  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.680711  303063 pod_ready.go:82] duration metric: took 399.091849ms for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:18.680723  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.680730  303063 pod_ready.go:39] duration metric: took 1.233465539s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:05:18.680747  303063 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:05:18.692948  303063 ops.go:34] apiserver oom_adj: -16
	I0920 19:05:18.692970  303063 kubeadm.go:597] duration metric: took 9.168545987s to restartPrimaryControlPlane
	I0920 19:05:18.692981  303063 kubeadm.go:394] duration metric: took 9.218309896s to StartCluster
	I0920 19:05:18.692999  303063 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:18.693078  303063 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:05:18.694921  303063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:18.695293  303063 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.230 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:05:18.696157  303063 config.go:182] Loaded profile config "default-k8s-diff-port-612312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:05:18.696187  303063 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:05:18.696357  303063 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-612312"
	I0920 19:05:18.696377  303063 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-612312"
	W0920 19:05:18.696387  303063 addons.go:243] addon storage-provisioner should already be in state true
	I0920 19:05:18.696419  303063 host.go:66] Checking if "default-k8s-diff-port-612312" exists ...
	I0920 19:05:18.696449  303063 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-612312"
	I0920 19:05:18.696495  303063 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-612312"
	I0920 19:05:18.696506  303063 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-612312"
	I0920 19:05:18.696588  303063 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-612312"
	W0920 19:05:18.696610  303063 addons.go:243] addon metrics-server should already be in state true
	I0920 19:05:18.696709  303063 host.go:66] Checking if "default-k8s-diff-port-612312" exists ...
	I0920 19:05:18.697239  303063 out.go:177] * Verifying Kubernetes components...
	I0920 19:05:18.697334  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.697386  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.697409  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.697409  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.697442  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.697531  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.698927  303063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:18.713346  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44359
	I0920 19:05:18.713346  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35127
	I0920 19:05:18.713967  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.714000  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.714472  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.714491  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.714572  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.714588  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.714961  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.714965  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.715163  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.715842  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.715893  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.717732  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I0920 19:05:18.718289  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.718553  303063 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-612312"
	W0920 19:05:18.718575  303063 addons.go:243] addon default-storageclass should already be in state true
	I0920 19:05:18.718604  303063 host.go:66] Checking if "default-k8s-diff-port-612312" exists ...
	I0920 19:05:18.718827  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.718852  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.718926  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.718956  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.719243  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.719782  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.719826  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.733219  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42087
	I0920 19:05:18.733789  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.734403  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.734422  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.734463  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41091
	I0920 19:05:18.734905  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.734993  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.735207  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.735363  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.735394  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.735703  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.736264  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.736321  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.737489  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:18.739977  303063 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 19:05:18.740477  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39547
	I0920 19:05:18.741217  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.741752  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:05:18.741770  303063 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:05:18.741791  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:18.741854  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.741875  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.742351  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.742547  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.744800  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:18.746006  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.746416  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:18.746442  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.746695  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:18.746961  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:18.746974  303063 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:15.815519  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:15.816035  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:15.816065  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:15.815974  304380 retry.go:31] will retry after 2.62788631s: waiting for machine to come up
	I0920 19:05:18.446768  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:18.447219  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:18.447240  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:18.447166  304380 retry.go:31] will retry after 4.025841071s: waiting for machine to come up
	I0920 19:05:16.784503  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:18.785829  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:18.747159  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:18.747332  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:18.748881  303063 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:05:18.748901  303063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:05:18.748932  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:18.752335  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.752787  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:18.752812  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.753180  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:18.753340  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:18.753491  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:18.753628  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:18.755106  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I0920 19:05:18.755543  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.756159  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.756182  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.756521  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.756710  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.758400  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:18.758674  303063 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:05:18.758690  303063 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:05:18.758707  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:18.762208  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.762748  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:18.762776  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.762950  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:18.763235  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:18.763518  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:18.763678  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:18.900876  303063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:05:18.919923  303063 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-612312" to be "Ready" ...
	I0920 19:05:18.993779  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:05:18.993814  303063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 19:05:19.001703  303063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:05:19.019424  303063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:05:19.054174  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:05:19.054202  303063 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:05:19.123651  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:05:19.123682  303063 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:05:19.186745  303063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:05:19.369866  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:19.369898  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:19.370210  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:19.370229  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:19.370246  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:19.370270  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:19.370552  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:19.370593  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:19.370625  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Closing plugin on server side
	I0920 19:05:19.380105  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:19.380140  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:19.380456  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:19.380472  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.145346  303063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.12587258s)
	I0920 19:05:20.145412  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.145427  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.145769  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Closing plugin on server side
	I0920 19:05:20.145834  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.145846  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.145866  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.145877  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.146126  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.146144  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.152067  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.152084  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.152361  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.152379  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.152388  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.152395  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.152625  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.152662  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Closing plugin on server side
	I0920 19:05:20.152711  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.152729  303063 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-612312"
	I0920 19:05:20.154940  303063 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0920 19:05:20.156326  303063 addons.go:510] duration metric: took 1.460148296s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
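
The addon-enable sequence above boils down to copying the manifests into /etc/kubernetes/addons and applying them with the bundled kubectl (the exact apply command is logged at 19:05:19.186745). Below is a minimal Go sketch of that apply step using only os/exec; applyAddonManifests is a hypothetical helper name for illustration, not minikube's own API.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyAddonManifests runs the bundled kubectl against the in-VM kubeconfig,
    // applying each addon manifest in one invocation (mirroring the logged command).
    func applyAddonManifests(kubeconfig, kubectl string, manifests []string) error {
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command(kubectl, args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl apply: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        err := applyAddonManifests(
            "/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.31.1/kubectl",
            []string{
                "/etc/kubernetes/addons/metrics-apiservice.yaml",
                "/etc/kubernetes/addons/metrics-server-deployment.yaml",
                "/etc/kubernetes/addons/metrics-server-rbac.yaml",
                "/etc/kubernetes/addons/metrics-server-service.yaml",
            },
        )
        fmt.Println(err)
    }
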
	I0920 19:05:20.923687  303063 node_ready.go:53] node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:22.924271  303063 node_ready.go:53] node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:23.791151  302538 start.go:364] duration metric: took 54.811585482s to acquireMachinesLock for "no-preload-037711"
	I0920 19:05:23.791208  302538 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:05:23.791219  302538 fix.go:54] fixHost starting: 
	I0920 19:05:23.791657  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:23.791696  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:23.809350  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32987
	I0920 19:05:23.809873  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:23.810520  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:05:23.810555  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:23.810893  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:23.811118  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:23.811286  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:05:23.812885  302538 fix.go:112] recreateIfNeeded on no-preload-037711: state=Stopped err=<nil>
	I0920 19:05:23.812914  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	W0920 19:05:23.813135  302538 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:05:23.815287  302538 out.go:177] * Restarting existing kvm2 VM for "no-preload-037711" ...
	I0920 19:05:22.477850  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.478419  303486 main.go:141] libmachine: (old-k8s-version-425599) Found IP for machine: 192.168.39.53
	I0920 19:05:22.478454  303486 main.go:141] libmachine: (old-k8s-version-425599) Reserving static IP address...
	I0920 19:05:22.478473  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has current primary IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.478983  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "old-k8s-version-425599", mac: "52:54:00:d2:a5:70", ip: "192.168.39.53"} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.479021  303486 main.go:141] libmachine: (old-k8s-version-425599) Reserved static IP address: 192.168.39.53
	I0920 19:05:22.479040  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | skip adding static IP to network mk-old-k8s-version-425599 - found existing host DHCP lease matching {name: "old-k8s-version-425599", mac: "52:54:00:d2:a5:70", ip: "192.168.39.53"}
	I0920 19:05:22.479055  303486 main.go:141] libmachine: (old-k8s-version-425599) Waiting for SSH to be available...
	I0920 19:05:22.479067  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | Getting to WaitForSSH function...
	I0920 19:05:22.481118  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.481359  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.481382  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.481556  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | Using SSH client type: external
	I0920 19:05:22.481570  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa (-rw-------)
	I0920 19:05:22.481600  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:05:22.481612  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | About to run SSH command:
	I0920 19:05:22.481627  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | exit 0
	I0920 19:05:22.606383  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | SSH cmd err, output: <nil>: 
	I0920 19:05:22.606783  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetConfigRaw
	I0920 19:05:22.607408  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:22.610155  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.610474  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.610506  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.610784  303486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/config.json ...
	I0920 19:05:22.611075  303486 machine.go:93] provisionDockerMachine start ...
	I0920 19:05:22.611103  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:22.611332  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.613838  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.614250  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.614283  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.614395  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:22.614609  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.614776  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.614950  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:22.615136  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:22.615331  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:22.615344  303486 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:05:22.718330  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:05:22.718363  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 19:05:22.718651  303486 buildroot.go:166] provisioning hostname "old-k8s-version-425599"
	I0920 19:05:22.718697  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 19:05:22.718913  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.722027  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.722334  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.722370  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.722559  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:22.722738  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.722909  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.723086  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:22.723261  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:22.723473  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:22.723491  303486 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-425599 && echo "old-k8s-version-425599" | sudo tee /etc/hostname
	I0920 19:05:22.841563  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-425599
	
	I0920 19:05:22.841592  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.844327  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.844716  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.844748  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.844970  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:22.845154  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.845306  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.845413  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:22.845570  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:22.845793  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:22.845818  303486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-425599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-425599/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-425599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:05:22.959542  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
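
The SSH snippet above keeps /etc/hosts consistent with the new hostname: if a 127.0.1.1 entry exists it is rewritten, otherwise one is appended, and nothing changes if the name is already mapped. A small Go sketch of the same idempotent edit on the file contents (ensureHostname is a hypothetical name, not minikube's helper):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHostname returns hosts with a "127.0.1.1 <name>" entry, rewriting an
    // existing 127.0.1.1 line if present and appending one otherwise.
    func ensureHostname(hosts, name string) string {
        if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
            return hosts // already mapped, nothing to do
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(hosts) {
            return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
        fmt.Print(ensureHostname("127.0.0.1 localhost\n", "old-k8s-version-425599"))
    }
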
	I0920 19:05:22.959572  303486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:05:22.959615  303486 buildroot.go:174] setting up certificates
	I0920 19:05:22.959625  303486 provision.go:84] configureAuth start
	I0920 19:05:22.959635  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 19:05:22.959972  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:22.962506  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.962845  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.962883  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.963020  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.965352  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.965734  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.965755  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.965936  303486 provision.go:143] copyHostCerts
	I0920 19:05:22.965999  303486 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:05:22.966018  303486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:05:22.966073  303486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:05:22.966165  303486 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:05:22.966173  303486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:05:22.966193  303486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:05:22.966250  303486 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:05:22.966257  303486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:05:22.966274  303486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:05:22.966368  303486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-425599 san=[127.0.0.1 192.168.39.53 localhost minikube old-k8s-version-425599]
	I0920 19:05:23.156245  303486 provision.go:177] copyRemoteCerts
	I0920 19:05:23.156322  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:05:23.156356  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.159694  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.160062  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.160105  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.160283  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.160467  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.160633  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.160755  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.244439  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:05:23.271796  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 19:05:23.298124  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:05:23.323466  303486 provision.go:87] duration metric: took 363.82725ms to configureAuth
	I0920 19:05:23.323496  303486 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:05:23.323711  303486 config.go:182] Loaded profile config "old-k8s-version-425599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 19:05:23.323805  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.326985  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.327336  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.327363  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.327573  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.327788  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.328003  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.328161  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.328315  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:23.328492  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:23.328506  303486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:05:23.559721  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:05:23.559755  303486 machine.go:96] duration metric: took 948.663189ms to provisionDockerMachine
	I0920 19:05:23.559770  303486 start.go:293] postStartSetup for "old-k8s-version-425599" (driver="kvm2")
	I0920 19:05:23.559781  303486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:05:23.559812  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.560186  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:05:23.560225  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.563146  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.563462  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.563491  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.563786  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.564018  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.564214  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.564365  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.645013  303486 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:05:23.649198  303486 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:05:23.649230  303486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:05:23.649300  303486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:05:23.649416  303486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:05:23.649544  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:05:23.659351  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:23.683405  303486 start.go:296] duration metric: took 123.617289ms for postStartSetup
	I0920 19:05:23.683466  303486 fix.go:56] duration metric: took 20.008417985s for fixHost
	I0920 19:05:23.683495  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.686540  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.686962  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.686988  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.687209  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.687445  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.687624  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.687803  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.688001  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:23.688188  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:23.688206  303486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:05:23.790992  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859123.767729644
	
	I0920 19:05:23.791024  303486 fix.go:216] guest clock: 1726859123.767729644
	I0920 19:05:23.791035  303486 fix.go:229] Guest: 2024-09-20 19:05:23.767729644 +0000 UTC Remote: 2024-09-20 19:05:23.683472425 +0000 UTC m=+234.770765310 (delta=84.257219ms)
	I0920 19:05:23.791061  303486 fix.go:200] guest clock delta is within tolerance: 84.257219ms
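
The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and only act when the drift exceeds a tolerance. A rough Go sketch of that comparison; the 2s tolerance below is an assumption for illustration, not the value minikube uses.

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns host-guest drift.
    func clockDelta(guestEpoch string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestEpoch, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return host.Sub(guest), nil
    }

    func main() {
        // Guest timestamp taken from the log line above; tolerance is assumed.
        delta, err := clockDelta("1726859123.767729644", time.Now())
        if err != nil {
            fmt.Println(err)
            return
        }
        const tolerance = 2 * time.Second
        if delta < -tolerance || delta > tolerance {
            fmt.Printf("guest clock delta %v outside tolerance, would resync\n", delta)
        } else {
            fmt.Printf("guest clock delta %v within tolerance\n", delta)
        }
    }
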
	I0920 19:05:23.791068  303486 start.go:83] releasing machines lock for "old-k8s-version-425599", held for 20.116056408s
	I0920 19:05:23.791101  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.791432  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:23.794540  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.795015  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.795048  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.795226  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.795779  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.795992  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.796129  303486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:05:23.796180  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.796241  303486 ssh_runner.go:195] Run: cat /version.json
	I0920 19:05:23.796265  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.799032  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.799374  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.799399  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.799418  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.799540  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.799743  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.799874  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.799890  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.799906  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.800084  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.800077  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.800198  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.800365  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.800514  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.924885  303486 ssh_runner.go:195] Run: systemctl --version
	I0920 19:05:23.932642  303486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:05:21.284671  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:23.284813  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:24.083860  303486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:05:24.090360  303486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:05:24.090444  303486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:05:24.112281  303486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:05:24.112310  303486 start.go:495] detecting cgroup driver to use...
	I0920 19:05:24.112383  303486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:05:24.136600  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:05:24.154552  303486 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:05:24.154631  303486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:05:24.170600  303486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:05:24.186071  303486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:05:24.319752  303486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:05:24.498299  303486 docker.go:233] disabling docker service ...
	I0920 19:05:24.498385  303486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:05:24.515762  303486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:05:24.533482  303486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:05:24.687481  303486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:05:24.820191  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:05:24.835255  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:05:24.856179  303486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 19:05:24.856253  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.868991  303486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:05:24.869080  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.884074  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.898732  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.911016  303486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:05:24.922757  303486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:05:24.937719  303486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:05:24.937828  303486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:05:24.955496  303486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
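
The fallback above is: when the bridge-netfilter sysctl cannot be read, the br_netfilter module has probably not been loaded, so modprobe it (which creates the /proc/sys/net/bridge tree) and carry on. A minimal Go sketch of that check-then-load flow (ensureBridgeNetfilter is a hypothetical helper, not minikube code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensureBridgeNetfilter checks for the bridge-nf-call-iptables sysctl and, if it
    // is missing, loads br_netfilter and re-checks that the sysctl now exists.
    func ensureBridgeNetfilter() error {
        const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
        if _, err := os.Stat(key); err == nil {
            return nil // module already loaded
        }
        if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
            return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
        }
        _, err := os.Stat(key)
        return err
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
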
	I0920 19:05:24.966347  303486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:25.114758  303486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:05:25.226807  303486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:05:25.226984  303486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:05:25.234576  303486 start.go:563] Will wait 60s for crictl version
	I0920 19:05:25.234664  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:25.238739  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:05:25.282242  303486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:05:25.282344  303486 ssh_runner.go:195] Run: crio --version
	I0920 19:05:25.317733  303486 ssh_runner.go:195] Run: crio --version
	I0920 19:05:25.353767  303486 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 19:05:23.816707  302538 main.go:141] libmachine: (no-preload-037711) Calling .Start
	I0920 19:05:23.817003  302538 main.go:141] libmachine: (no-preload-037711) Ensuring networks are active...
	I0920 19:05:23.817953  302538 main.go:141] libmachine: (no-preload-037711) Ensuring network default is active
	I0920 19:05:23.818345  302538 main.go:141] libmachine: (no-preload-037711) Ensuring network mk-no-preload-037711 is active
	I0920 19:05:23.818824  302538 main.go:141] libmachine: (no-preload-037711) Getting domain xml...
	I0920 19:05:23.819705  302538 main.go:141] libmachine: (no-preload-037711) Creating domain...
	I0920 19:05:25.216298  302538 main.go:141] libmachine: (no-preload-037711) Waiting to get IP...
	I0920 19:05:25.217452  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:25.218073  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:25.218138  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:25.218047  304582 retry.go:31] will retry after 256.299732ms: waiting for machine to come up
	I0920 19:05:25.475745  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:25.476451  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:25.476485  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:25.476388  304582 retry.go:31] will retry after 298.732749ms: waiting for machine to come up
	I0920 19:05:25.777093  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:25.777731  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:25.777755  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:25.777701  304582 retry.go:31] will retry after 360.011383ms: waiting for machine to come up
	I0920 19:05:26.139480  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:26.140100  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:26.140132  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:26.140049  304582 retry.go:31] will retry after 593.756705ms: waiting for machine to come up
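
The repeated "will retry after ..." lines are a jittered backoff loop polling libvirt for the domain's DHCP lease. A simplified Go sketch of that polling shape follows; the initial delay, growth factor and jitter are illustrative, not minikube's exact parameters.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it returns an address or the timeout passes,
    // sleeping a jittered, growing delay between attempts (compare the 256ms,
    // 298ms, 360ms... retries logged above).
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        wait := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            wait = wait * 3 / 2
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 4 {
                return "", errors.New("no DHCP lease yet")
            }
            return "192.0.2.1", nil // placeholder address, not taken from the log
        }, time.Minute)
        fmt.Println(ip, err)
    }
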
	I0920 19:05:24.924455  303063 node_ready.go:53] node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:26.425132  303063 node_ready.go:49] node "default-k8s-diff-port-612312" has status "Ready":"True"
	I0920 19:05:26.425165  303063 node_ready.go:38] duration metric: took 7.505210484s for node "default-k8s-diff-port-612312" to be "Ready" ...
	I0920 19:05:26.425181  303063 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:05:26.433394  303063 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:26.440462  303063 pod_ready.go:93] pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:26.440497  303063 pod_ready.go:82] duration metric: took 7.072952ms for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:26.440513  303063 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:25.354959  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:25.358179  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:25.358467  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:25.358495  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:25.358739  303486 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 19:05:25.362714  303486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:25.375880  303486 kubeadm.go:883] updating cluster {Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:05:25.376024  303486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 19:05:25.376074  303486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:25.420224  303486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 19:05:25.420307  303486 ssh_runner.go:195] Run: which lz4
	I0920 19:05:25.424775  303486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:05:25.430102  303486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:05:25.430151  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 19:05:27.014068  303486 crio.go:462] duration metric: took 1.589333502s to copy over tarball
	I0920 19:05:27.014160  303486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
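
Since no preloaded images were found, the provisioner falls back to the cached preload tarball: stat it on the guest, scp it over if missing, then unpack it into /var with lz4 (the exact commands are in the lines above). A hedged Go sketch of that flow with local stand-ins for the SSH runner; installPreload and guestRunner are hypothetical names, not minikube's API.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // guestRunner abstracts "run this command on the VM over SSH"; below it is a
    // local stand-in so the sketch runs without a guest.
    type guestRunner func(name string, args ...string) error

    // installPreload mirrors the logged flow: if /preloaded.tar.lz4 is missing on
    // the guest, copy the cached tarball over, then unpack it into /var.
    func installPreload(run guestRunner, copyToGuest func(src, dst string) error, cachedTarball string) error {
        if err := run("stat", "-c", "%s %y", "/preloaded.tar.lz4"); err != nil {
            if err := copyToGuest(cachedTarball, "/preloaded.tar.lz4"); err != nil {
                return err
            }
        }
        return run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    }

    func main() {
        // Local stand-ins: echo the command instead of executing it on a VM.
        run := guestRunner(func(name string, args ...string) error {
            return exec.Command("echo", append([]string{name}, args...)...).Run()
        })
        copyToGuest := func(src, dst string) error { fmt.Println("scp", src, "->", dst); return nil }
        cached := "/home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
        fmt.Println(installPreload(run, copyToGuest, cached))
    }
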
	I0920 19:05:25.786282  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:27.788058  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:26.735924  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:26.736558  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:26.736582  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:26.736458  304582 retry.go:31] will retry after 712.118443ms: waiting for machine to come up
	I0920 19:05:27.450059  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:27.450696  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:27.450719  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:27.450592  304582 retry.go:31] will retry after 588.649809ms: waiting for machine to come up
	I0920 19:05:28.041216  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:28.041760  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:28.041791  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:28.041691  304582 retry.go:31] will retry after 869.42079ms: waiting for machine to come up
	I0920 19:05:28.912809  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:28.913240  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:28.913265  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:28.913174  304582 retry.go:31] will retry after 1.410011475s: waiting for machine to come up
	I0920 19:05:30.324367  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:30.324952  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:30.324980  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:30.324875  304582 retry.go:31] will retry after 1.398358739s: waiting for machine to come up
	I0920 19:05:28.454512  303063 pod_ready.go:103] pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:31.546557  303063 pod_ready.go:103] pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:32.072690  303063 pod_ready.go:93] pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.072719  303063 pod_ready.go:82] duration metric: took 5.632196538s for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.072734  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.081029  303063 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.081062  303063 pod_ready.go:82] duration metric: took 8.319382ms for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.081076  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.087314  303063 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.087338  303063 pod_ready.go:82] duration metric: took 6.253184ms for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.087351  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.093286  303063 pod_ready.go:93] pod "kube-proxy-zp8l5" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.093313  303063 pod_ready.go:82] duration metric: took 5.953425ms for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.093326  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.098529  303063 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.098553  303063 pod_ready.go:82] duration metric: took 5.218413ms for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.098565  303063 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:30.096727  303486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.082523066s)
	I0920 19:05:30.096778  303486 crio.go:469] duration metric: took 3.082671461s to extract the tarball
	I0920 19:05:30.096789  303486 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 19:05:30.148059  303486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:30.184547  303486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 19:05:30.184578  303486 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 19:05:30.184672  303486 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:30.184686  303486 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.184711  303486 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.184730  303486 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.184732  303486 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.184686  303486 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 19:05:30.184693  303486 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.184792  303486 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.186558  303486 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:30.186609  303486 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 19:05:30.186607  303486 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.186616  303486 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.186688  303486 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.186698  303486 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.186701  303486 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.186565  303486 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.425283  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 19:05:30.469378  303486 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 19:05:30.469448  303486 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 19:05:30.469514  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.475453  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:05:30.493250  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.505003  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.513203  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.514365  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.521729  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:05:30.533265  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.580710  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.613984  303486 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 19:05:30.614033  303486 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.614085  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.653094  303486 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 19:05:30.653150  303486 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.653205  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.675697  303486 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 19:05:30.675730  303486 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 19:05:30.675752  303486 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.675762  303486 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.675805  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.675820  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.675805  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:05:30.709199  303486 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 19:05:30.709261  303486 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.709310  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.720146  303486 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 19:05:30.720198  303486 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.720233  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.720313  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.720241  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.720374  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.720247  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.737444  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.737487  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 19:05:30.843272  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.843362  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.843366  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.860414  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.860462  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.860430  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.954641  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.982227  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.982263  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:05:31.041996  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:31.042032  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:31.042650  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:31.042722  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:31.070786  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 19:05:31.120407  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 19:05:31.135751  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 19:05:31.163591  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 19:05:31.164483  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 19:05:31.164587  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 19:05:31.345957  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:31.486337  303486 cache_images.go:92] duration metric: took 1.301737533s to LoadCachedImages
	W0920 19:05:31.486434  303486 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
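The warning above means the expected cache tarball (.minikube/cache/images/amd64/registry.k8s.io/pause_3.2) is missing on the host, so the whole LoadCachedImages step is abandoned and kubeadm pulls the images later instead. A small, purely illustrative Go sketch for checking which cache tarballs are present before a restart; the cache root and image list are assumptions modelled on the "Loading image from:" paths printed above, which show the tag separator rewritten to an underscore (pause_3.2, kube-controller-manager_v1.20.0):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Assumed cache layout, mirroring the paths shown in the log above.
	cacheRoot := os.ExpandEnv("$HOME/.minikube/cache/images/amd64")
	images := []string{
		"registry.k8s.io/pause:3.2",
		"registry.k8s.io/etcd:3.4.13-0",
		"registry.k8s.io/coredns:1.7.0",
		"registry.k8s.io/kube-apiserver:v1.20.0",
	}
	for _, img := range images {
		// Tarballs are stored with the ":" between repo and tag replaced by "_".
		p := filepath.Join(cacheRoot, strings.Replace(img, ":", "_", 1))
		if _, err := os.Stat(p); err != nil {
			fmt.Printf("missing: %s\n", p)
		} else {
			fmt.Printf("present: %s\n", p)
		}
	}
}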
	I0920 19:05:31.486452  303486 kubeadm.go:934] updating node { 192.168.39.53 8443 v1.20.0 crio true true} ...
	I0920 19:05:31.486576  303486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-425599 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:05:31.486661  303486 ssh_runner.go:195] Run: crio config
	I0920 19:05:31.544181  303486 cni.go:84] Creating CNI manager for ""
	I0920 19:05:31.544215  303486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:05:31.544229  303486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:05:31.544257  303486 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.53 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-425599 NodeName:old-k8s-version-425599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 19:05:31.544465  303486 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-425599"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
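The block above is the full kubeadm configuration minikube renders for Kubernetes v1.20.0: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration and a KubeProxyConfiguration separated by ---, which the following lines write out as /var/tmp/minikube/kubeadm.yaml.new (2120 bytes). A rough sketch, assuming only that the file is plain multi-document YAML, that lists the kind of each document as a sanity check; a real tool would use a YAML parser rather than string splitting:

package main

import (
	"fmt"
	"os"
	"strings"
)

// listKinds splits a multi-document YAML file on "---" separators and
// prints the kind declared in each document. Illustrative only.
func listKinds(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "(unknown)"
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
				break
			}
		}
		fmt.Printf("document %d: %s\n", i+1, kind)
	}
	return nil
}

func main() {
	// Path taken from the scp destination shown below; adjust as needed.
	if err := listKinds("/var/tmp/minikube/kubeadm.yaml.new"); err != nil {
		fmt.Println("error:", err)
	}
}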
	I0920 19:05:31.544556  303486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 19:05:31.559445  303486 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:05:31.559542  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:05:31.570446  303486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0920 19:05:31.588741  303486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:05:31.606454  303486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0920 19:05:31.624483  303486 ssh_runner.go:195] Run: grep 192.168.39.53	control-plane.minikube.internal$ /etc/hosts
	I0920 19:05:31.628285  303486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:31.641039  303486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:31.771690  303486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:05:31.789746  303486 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599 for IP: 192.168.39.53
	I0920 19:05:31.789775  303486 certs.go:194] generating shared ca certs ...
	I0920 19:05:31.789806  303486 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:31.790074  303486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:05:31.790150  303486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:05:31.790165  303486 certs.go:256] generating profile certs ...
	I0920 19:05:31.798117  303486 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/client.key
	I0920 19:05:31.798270  303486 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.key.e78cb154
	I0920 19:05:31.798333  303486 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.key
	I0920 19:05:31.798499  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:05:31.798543  303486 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:05:31.798557  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:05:31.798608  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:05:31.798659  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:05:31.798692  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:05:31.798748  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:31.799624  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:05:31.843298  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:05:31.877299  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:05:31.909777  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:05:31.947787  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 19:05:31.991175  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 19:05:32.019393  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:05:32.048475  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:05:32.084354  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:05:32.112161  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:05:32.138991  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:05:32.167653  303486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:05:32.185485  303486 ssh_runner.go:195] Run: openssl version
	I0920 19:05:32.192030  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:05:32.203761  303486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:32.209550  303486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:32.209650  303486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:32.216277  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:05:32.228192  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:05:32.239984  303486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:05:32.244782  303486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:05:32.244848  303486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:05:32.250865  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:05:32.262035  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:05:32.273790  303486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:05:32.279335  303486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:05:32.279414  303486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:05:32.286501  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:05:32.298052  303486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:05:32.303064  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:05:32.309973  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:05:32.316704  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:05:32.323166  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:05:32.330126  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:05:32.336554  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
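Each "openssl x509 -noout -in <cert> -checkend 86400" run above exits 0 only if the named certificate remains valid for at least the next 86400 seconds (24 hours); that is how the restart path decides the existing control-plane certificates can be reused. A minimal stdlib-only Go equivalent of the same test, shown for one of the certificates checked above (the path is illustrative, not part of minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkExpiry reports whether the PEM certificate at path stays valid for at
// least the given window; time.Now().Add(window).Before(NotAfter) is the same
// condition openssl's -checkend evaluates.
func checkExpiry(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkExpiry("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("valid for at least 24h:", ok)
}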
	I0920 19:05:32.343303  303486 kubeadm.go:392] StartCluster: {Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:05:32.343413  303486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:05:32.343473  303486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:32.387562  303486 cri.go:89] found id: ""
	I0920 19:05:32.387653  303486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:05:32.398143  303486 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:05:32.398167  303486 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:05:32.398222  303486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:05:32.407776  303486 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:05:32.409205  303486 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-425599" does not appear in /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:05:32.410267  303486 kubeconfig.go:62] /home/jenkins/minikube-integration/19679-237658/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-425599" cluster setting kubeconfig missing "old-k8s-version-425599" context setting]
	I0920 19:05:32.411776  303486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:32.457074  303486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:05:32.468055  303486 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.53
	I0920 19:05:32.468113  303486 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:05:32.468132  303486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:05:32.468211  303486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:32.505151  303486 cri.go:89] found id: ""
	I0920 19:05:32.505241  303486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:05:32.521391  303486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:05:32.531705  303486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:05:32.531728  303486 kubeadm.go:157] found existing configuration files:
	
	I0920 19:05:32.531774  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:05:32.541137  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:05:32.541219  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:05:32.550684  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:05:32.560262  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:05:32.560352  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:05:32.569735  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:05:32.579126  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:05:32.579199  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:05:32.589508  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:05:32.600985  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:05:32.601100  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:05:32.611511  303486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:05:32.622346  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:32.755562  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:33.793472  303486 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.037864747s)
	I0920 19:05:33.793513  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:30.283826  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:32.285077  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:31.725721  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:31.726171  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:31.726198  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:31.726127  304582 retry.go:31] will retry after 2.32427136s: waiting for machine to come up
	I0920 19:05:34.052412  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:34.053005  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:34.053043  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:34.052923  304582 retry.go:31] will retry after 2.159036217s: waiting for machine to come up
	I0920 19:05:36.215059  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:36.215561  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:36.215585  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:36.215501  304582 retry.go:31] will retry after 3.424610182s: waiting for machine to come up
	I0920 19:05:34.105780  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:36.106491  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:34.021260  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:34.142176  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:34.235507  303486 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:05:34.235618  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:34.736586  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:35.236065  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:35.735783  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:36.236406  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:36.736243  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:37.235994  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:37.736168  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:38.236559  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:38.736139  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:34.784743  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:37.282598  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:39.284890  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:39.642163  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:39.642600  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:39.642642  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:39.642541  304582 retry.go:31] will retry after 3.073679854s: waiting for machine to come up
	I0920 19:05:38.116192  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:40.605958  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:39.236010  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:39.735723  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:40.236003  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:40.735741  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:41.235689  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:41.736411  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:42.236028  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:42.735814  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:43.236391  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:43.736174  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:41.783707  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:43.784197  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:42.719195  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.719748  302538 main.go:141] libmachine: (no-preload-037711) Found IP for machine: 192.168.61.136
	I0920 19:05:42.719775  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has current primary IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.719780  302538 main.go:141] libmachine: (no-preload-037711) Reserving static IP address...
	I0920 19:05:42.720201  302538 main.go:141] libmachine: (no-preload-037711) Reserved static IP address: 192.168.61.136
	I0920 19:05:42.720220  302538 main.go:141] libmachine: (no-preload-037711) Waiting for SSH to be available...
	I0920 19:05:42.720239  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "no-preload-037711", mac: "52:54:00:b0:ac:14", ip: "192.168.61.136"} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.720268  302538 main.go:141] libmachine: (no-preload-037711) DBG | skip adding static IP to network mk-no-preload-037711 - found existing host DHCP lease matching {name: "no-preload-037711", mac: "52:54:00:b0:ac:14", ip: "192.168.61.136"}
	I0920 19:05:42.720280  302538 main.go:141] libmachine: (no-preload-037711) DBG | Getting to WaitForSSH function...
	I0920 19:05:42.722402  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.722661  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.722686  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.722864  302538 main.go:141] libmachine: (no-preload-037711) DBG | Using SSH client type: external
	I0920 19:05:42.722877  302538 main.go:141] libmachine: (no-preload-037711) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa (-rw-------)
	I0920 19:05:42.722939  302538 main.go:141] libmachine: (no-preload-037711) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:05:42.722962  302538 main.go:141] libmachine: (no-preload-037711) DBG | About to run SSH command:
	I0920 19:05:42.722979  302538 main.go:141] libmachine: (no-preload-037711) DBG | exit 0
	I0920 19:05:42.850057  302538 main.go:141] libmachine: (no-preload-037711) DBG | SSH cmd err, output: <nil>: 
	I0920 19:05:42.850451  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetConfigRaw
	I0920 19:05:42.851176  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:42.853807  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.854268  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.854290  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.854558  302538 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/config.json ...
	I0920 19:05:42.854764  302538 machine.go:93] provisionDockerMachine start ...
	I0920 19:05:42.854782  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:42.854999  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:42.857347  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.857683  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.857712  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.857892  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:42.858073  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.858242  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.858385  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:42.858569  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:42.858755  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:42.858766  302538 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:05:42.962098  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:05:42.962137  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:05:42.962455  302538 buildroot.go:166] provisioning hostname "no-preload-037711"
	I0920 19:05:42.962488  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:05:42.962696  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:42.965410  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.965793  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.965822  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.965954  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:42.966128  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.966285  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.966442  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:42.966650  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:42.966822  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:42.966847  302538 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-037711 && echo "no-preload-037711" | sudo tee /etc/hostname
	I0920 19:05:43.089291  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-037711
	
	I0920 19:05:43.089338  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.092213  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.092658  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.092689  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.092828  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.093031  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.093188  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.093305  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.093478  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:43.093692  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:43.093719  302538 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-037711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-037711/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-037711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:05:43.210625  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
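The inline shell above is what makes the hostname change idempotent: if /etc/hosts already names the machine it is left alone, an existing 127.0.1.1 line is rewritten for the new hostname, and otherwise a fresh entry is appended. A stdlib-only Go sketch of the same idempotent update, with a placeholder path and hostname (this is not what minikube runs, just an illustration of the logic):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry rewrites an existing 127.0.1.1 line or appends a new one,
// skipping the write entirely if the hostname is already present.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	text := string(data)
	if strings.Contains(text, hostname) {
		return nil // already mapped, nothing to do
	}
	entry := "127.0.1.1 " + hostname
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	var out string
	if re.MatchString(text) {
		out = re.ReplaceAllString(text, entry)
	} else {
		out = strings.TrimRight(text, "\n") + "\n" + entry + "\n"
	}
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	// Placeholder path so the example does not touch the real /etc/hosts.
	if err := ensureHostsEntry("/tmp/hosts-example", "no-preload-037711"); err != nil {
		fmt.Println("error:", err)
	}
}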
	I0920 19:05:43.210660  302538 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:05:43.210720  302538 buildroot.go:174] setting up certificates
	I0920 19:05:43.210740  302538 provision.go:84] configureAuth start
	I0920 19:05:43.210758  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:05:43.211093  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:43.213829  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.214346  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.214379  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.214542  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.216979  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.217294  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.217319  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.217461  302538 provision.go:143] copyHostCerts
	I0920 19:05:43.217526  302538 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:05:43.217546  302538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:05:43.217610  302538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:05:43.217708  302538 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:05:43.217720  302538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:05:43.217750  302538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:05:43.217885  302538 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:05:43.217899  302538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:05:43.217947  302538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:05:43.218008  302538 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.no-preload-037711 san=[127.0.0.1 192.168.61.136 localhost minikube no-preload-037711]
	I0920 19:05:43.395507  302538 provision.go:177] copyRemoteCerts
	I0920 19:05:43.395576  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:05:43.395607  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.398288  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.398663  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.398694  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.398899  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.399087  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.399205  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.399324  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:43.488543  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 19:05:43.514793  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:05:43.537520  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:05:43.561983  302538 provision.go:87] duration metric: took 351.22541ms to configureAuth
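provision.go:117 above generates a machine server certificate signed by the minikube CA with SANs [127.0.0.1 192.168.61.136 localhost minikube no-preload-037711], and configureAuth completes once the host-side copies are in place. A hedged approximation with crypto/x509 that builds a self-signed certificate carrying the same SAN list; the real provisioning signs with the CA key, and the validity period here only mirrors the CertExpiration value from the profile config:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed stand-in; a real setup would pass the CA certificate and key
	// as the parent and signer instead of the template and its own key.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-037711"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-037711"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.136")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	fmt.Println("generated self-signed server cert with the SAN list shown above")
}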
	I0920 19:05:43.562021  302538 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:05:43.562213  302538 config.go:182] Loaded profile config "no-preload-037711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:05:43.562292  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.565776  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.566235  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.566270  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.566486  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.566706  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.566895  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.567043  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.567251  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:43.567439  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:43.567454  302538 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:05:43.797110  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:05:43.797142  302538 machine.go:96] duration metric: took 942.364782ms to provisionDockerMachine
	I0920 19:05:43.797157  302538 start.go:293] postStartSetup for "no-preload-037711" (driver="kvm2")
	I0920 19:05:43.797171  302538 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:05:43.797193  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:43.797516  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:05:43.797546  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.800148  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.800532  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.800559  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.800794  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.800993  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.801158  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.801255  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:43.885788  302538 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:05:43.890070  302538 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:05:43.890108  302538 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:05:43.890198  302538 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:05:43.890293  302538 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:05:43.890405  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:05:43.899679  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:43.924928  302538 start.go:296] duration metric: took 127.752462ms for postStartSetup
	I0920 19:05:43.924973  302538 fix.go:56] duration metric: took 20.133755115s for fixHost
	I0920 19:05:43.924996  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.927678  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.928059  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.928099  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.928277  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.928517  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.928685  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.928815  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.928979  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:43.929190  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:43.929204  302538 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:05:44.042745  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859144.016675004
	
	I0920 19:05:44.042769  302538 fix.go:216] guest clock: 1726859144.016675004
	I0920 19:05:44.042776  302538 fix.go:229] Guest: 2024-09-20 19:05:44.016675004 +0000 UTC Remote: 2024-09-20 19:05:43.924977449 +0000 UTC m=+357.534412233 (delta=91.697555ms)
	I0920 19:05:44.042804  302538 fix.go:200] guest clock delta is within tolerance: 91.697555ms
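
The fix step above compares the guest clock against the host and accepts a small skew. A minimal sketch of that tolerance check; only the ~91ms delta comes from the log, the 2s tolerance used here is an assumption.

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether two clocks agree to within tol, mirroring the
    // "guest clock delta is within tolerance" decision logged above.
    func withinTolerance(guest, host time.Time, tol time.Duration) bool {
        d := guest.Sub(host)
        if d < 0 {
            d = -d
        }
        return d <= tol
    }

    func main() {
        host := time.Now()
        guest := host.Add(91 * time.Millisecond) // skew of the same order as the logged delta
        fmt.Println(withinTolerance(guest, host, 2*time.Second))
    }
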
	I0920 19:05:44.042819  302538 start.go:83] releasing machines lock for "no-preload-037711", held for 20.251627041s
	I0920 19:05:44.042842  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.043134  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:44.046071  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.046412  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:44.046440  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.046613  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.047113  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.047278  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.047366  302538 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:05:44.047428  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:44.047520  302538 ssh_runner.go:195] Run: cat /version.json
	I0920 19:05:44.047548  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:44.050275  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.050358  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.050849  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:44.050872  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:44.050892  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.050915  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.051095  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:44.051259  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:44.051259  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:44.051496  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:44.051637  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:44.051655  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:44.051789  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:44.051953  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:44.134420  302538 ssh_runner.go:195] Run: systemctl --version
	I0920 19:05:44.175303  302538 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:05:44.319129  302538 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:05:44.325894  302538 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:05:44.325975  302538 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:05:44.341779  302538 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:05:44.341809  302538 start.go:495] detecting cgroup driver to use...
	I0920 19:05:44.341899  302538 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:05:44.358211  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:05:44.373240  302538 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:05:44.373327  302538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:05:44.387429  302538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:05:44.401684  302538 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:05:44.521292  302538 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:05:44.668050  302538 docker.go:233] disabling docker service ...
	I0920 19:05:44.668124  302538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:05:44.683196  302538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:05:44.696604  302538 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:05:44.843581  302538 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:05:44.959377  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:05:44.973472  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:05:44.991282  302538 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:05:44.991344  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.001696  302538 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:05:45.001776  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.012684  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.023288  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.034330  302538 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:05:45.045773  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.056332  302538 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.074730  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
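
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place. A rough Go equivalent of the cgroup_manager edit, assuming it runs on the guest; the file mode is a guess rather than something read from the VM.

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf" // same file the sed commands above target
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        out := re.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, out, 0o644); err != nil { // 0644 is assumed, not taken from the original file
            panic(err)
        }
    }
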
	I0920 19:05:45.085656  302538 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:05:45.096371  302538 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:05:45.096447  302538 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:05:45.112094  302538 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
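
When the bridge-netfilter sysctl cannot be read (the status 255 error above), the setup falls back to loading br_netfilter and then enables IPv4 forwarding. A small sketch of that fallback using the same shell commands as the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // If the sysctl is missing, load the module first, as the log does.
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                fmt.Println("modprobe br_netfilter:", err)
            }
        }
        // Then enable IPv4 forwarding, matching the echo command above.
        if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
            fmt.Println("ip_forward:", err)
        }
    }
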
	I0920 19:05:45.123050  302538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:45.236136  302538 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:05:45.325978  302538 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:05:45.326065  302538 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:05:45.330452  302538 start.go:563] Will wait 60s for crictl version
	I0920 19:05:45.330527  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.334010  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:05:45.373622  302538 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
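
The two "Will wait 60s" lines above poll for /var/run/crio/crio.sock and then for crictl. A minimal polling helper along those lines; the 500ms interval is an assumption, not something reported in the log.

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists as a unix socket or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
    }
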
	I0920 19:05:45.373736  302538 ssh_runner.go:195] Run: crio --version
	I0920 19:05:45.401279  302538 ssh_runner.go:195] Run: crio --version
	I0920 19:05:45.430445  302538 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 19:05:45.431717  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:45.434768  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:45.435094  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:45.435121  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:45.435335  302538 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 19:05:45.439275  302538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:45.451300  302538 kubeadm.go:883] updating cluster {Name:no-preload-037711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:no-preload-037711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:05:45.451461  302538 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:05:45.451502  302538 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:45.485045  302538 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 19:05:45.485073  302538 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 19:05:45.485130  302538 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:45.485150  302538 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.485168  302538 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.485182  302538 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.485231  302538 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.485171  302538 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.485305  302538 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 19:05:45.485450  302538 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.486694  302538 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.486700  302538 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.486808  302538 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.486808  302538 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 19:05:45.486829  302538 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.486894  302538 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:45.486829  302538 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.487055  302538 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.708911  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0920 19:05:45.773014  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.815176  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.818274  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.818298  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.829644  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.850791  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.862553  302538 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0920 19:05:45.862616  302538 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.862680  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.907516  302538 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0920 19:05:45.907573  302538 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.907629  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.938640  302538 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0920 19:05:45.938715  302538 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.938755  302538 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0920 19:05:45.938799  302538 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.938845  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.938770  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.947658  302538 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0920 19:05:45.947706  302538 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.947757  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.965105  302538 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0920 19:05:45.965161  302538 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.965166  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.965191  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.965248  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.965282  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.965344  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.965350  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:46.044513  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:46.044640  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:46.077894  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:46.080113  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:46.080170  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:46.080239  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:46.155137  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:46.155188  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:46.208431  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:46.208477  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:46.208521  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:46.208565  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:46.290657  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:46.290694  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 19:05:46.290794  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 19:05:46.325206  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 19:05:46.325353  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 19:05:46.353181  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0920 19:05:46.353289  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 19:05:46.353307  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 19:05:46.353312  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0920 19:05:46.353331  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0920 19:05:46.353383  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0920 19:05:46.353418  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 19:05:46.353384  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 19:05:46.353512  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 19:05:46.379873  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0920 19:05:46.379934  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0920 19:05:46.379979  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 19:05:46.380024  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0920 19:05:46.379981  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0920 19:05:46.380134  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 19:05:43.105005  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:45.105781  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:47.604822  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
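
The pod_ready.go lines above and below poll the metrics-server pod's Ready condition. A minimal client-go sketch of that check; the kubeconfig path is hypothetical, only the namespace and pod name come from the log.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "metrics-server-6867b74b74-2tnqc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        ready := false
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                ready = true
            }
        }
        fmt.Println("Ready:", ready) // the log keeps reporting "Ready":"False" for this pod
    }
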
	I0920 19:05:44.235886  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:44.736349  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:45.235783  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:45.736619  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:46.236082  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:46.736609  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:47.236078  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:47.736130  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:48.236218  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:48.735858  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:45.784555  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:47.785125  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:46.622278  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:48.339532  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.985991382s)
	I0920 19:05:48.339568  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0920 19:05:48.339594  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 19:05:48.339653  302538 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.959488823s)
	I0920 19:05:48.339685  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0920 19:05:48.339665  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 19:05:48.339742  302538 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.717432253s)
	I0920 19:05:48.339787  302538 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0920 19:05:48.339815  302538 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:48.339842  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:48.343725  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:50.823508  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.483779728s)
	I0920 19:05:50.823559  302538 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.479795238s)
	I0920 19:05:50.823593  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0920 19:05:50.823637  302538 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0920 19:05:50.823649  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:50.823692  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0920 19:05:49.607326  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:51.609055  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:49.236645  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:49.736183  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:50.236642  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:50.735713  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:51.235862  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:51.736479  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:52.235726  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:52.735939  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:53.235759  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:53.736290  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:50.284090  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:52.284996  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:54.127303  302538 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.303601736s)
	I0920 19:05:54.127415  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:54.127327  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.303608969s)
	I0920 19:05:54.127455  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0920 19:05:54.127488  302538 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0920 19:05:54.127530  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0920 19:05:56.202021  302538 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.074563861s)
	I0920 19:05:56.202050  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.074501802s)
	I0920 19:05:56.202076  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0920 19:05:56.202095  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 19:05:56.202118  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 19:05:56.202184  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 19:05:56.202202  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 19:05:56.207141  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0920 19:05:54.104909  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:56.105373  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:54.235840  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:54.735817  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:55.235812  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:55.736410  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:56.236203  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:56.735713  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:57.235777  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:57.735835  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:58.236448  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:58.736010  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:54.783661  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:56.784770  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:58.785122  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:58.166303  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.964088667s)
	I0920 19:05:58.166340  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0920 19:05:58.166369  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 19:05:58.166424  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 19:05:59.625258  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.458808535s)
	I0920 19:05:59.625294  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0920 19:05:59.625318  302538 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 19:05:59.625361  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0920 19:06:00.572722  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 19:06:00.572768  302538 cache_images.go:123] Successfully loaded all cached images
	I0920 19:06:00.572774  302538 cache_images.go:92] duration metric: took 15.087689513s to LoadCachedImages
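
LoadCachedImages above inspects each image in the runtime and, when it is missing, transfers the cached tarball and runs podman load. A rough sketch of that decision; the real cache_images.go logic compares image IDs against expected hashes rather than just the inspect exit code.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // needsTransfer reports whether the runtime is missing the image, approximating
    // the "needs transfer: ... does not exist at hash" checks above.
    func needsTransfer(image string) bool {
        return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() != nil
    }

    func main() {
        image := "registry.k8s.io/kube-proxy:v1.31.1"
        tarball := "/var/lib/minikube/images/kube-proxy_v1.31.1" // path used by the load step above
        if needsTransfer(image) {
            out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
            fmt.Println(string(out), err)
        }
    }
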
	I0920 19:06:00.572788  302538 kubeadm.go:934] updating node { 192.168.61.136 8443 v1.31.1 crio true true} ...
	I0920 19:06:00.572917  302538 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-037711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-037711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:06:00.572994  302538 ssh_runner.go:195] Run: crio config
	I0920 19:06:00.619832  302538 cni.go:84] Creating CNI manager for ""
	I0920 19:06:00.619861  302538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:06:00.619875  302538 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:06:00.619910  302538 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.136 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-037711 NodeName:no-preload-037711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:06:00.620110  302538 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-037711"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:06:00.620181  302538 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:06:00.630434  302538 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:06:00.630513  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:06:00.639447  302538 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0920 19:06:00.656195  302538 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:06:00.675718  302538 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0920 19:06:00.709191  302538 ssh_runner.go:195] Run: grep 192.168.61.136	control-plane.minikube.internal$ /etc/hosts
	I0920 19:06:00.713271  302538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:06:00.726826  302538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:06:00.850927  302538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:06:00.869014  302538 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711 for IP: 192.168.61.136
	I0920 19:06:00.869044  302538 certs.go:194] generating shared ca certs ...
	I0920 19:06:00.869109  302538 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:06:00.869331  302538 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:06:00.869393  302538 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:06:00.869405  302538 certs.go:256] generating profile certs ...
	I0920 19:06:00.869507  302538 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/client.key
	I0920 19:06:00.869589  302538 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/apiserver.key.b5da98fb
	I0920 19:06:00.869654  302538 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/proxy-client.key
	I0920 19:06:00.869831  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:06:00.869877  302538 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:06:00.869890  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:06:00.869947  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:06:00.869981  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:06:00.870010  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:06:00.870068  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:06:00.870802  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:06:00.922699  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:06:00.953401  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:06:00.996889  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:06:01.024682  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 19:06:01.050412  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 19:06:01.081212  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:06:01.108337  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:06:01.133628  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:06:01.158805  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:06:01.186888  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:06:01.211771  302538 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:06:01.229448  302538 ssh_runner.go:195] Run: openssl version
	I0920 19:06:01.235289  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:06:01.246775  302538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:06:01.251410  302538 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:06:01.251472  302538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:06:01.257271  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:06:01.268229  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:06:01.280431  302538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:06:01.285643  302538 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:06:01.285736  302538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:06:01.291772  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:06:01.302858  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:06:01.314034  302538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:06:01.319160  302538 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:06:01.319235  302538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:06:01.325450  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
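
The ln -fs commands above create OpenSSL-style trust links whose names are the certificate's subject hash plus ".0". A small sketch of deriving that link name; it only prints the link rather than creating it, and the certificate path is one of those in the log.

    package main

    import (
        "fmt"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        // Same hash the log obtains with `openssl x509 -hash -noout -in ...`.
        hash, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(hash))+".0")
        fmt.Println("would run: sudo ln -fs", cert, link)
    }
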
	I0920 19:06:01.336803  302538 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:06:01.341439  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:06:01.347592  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:06:01.354109  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:06:01.360513  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:06:01.366749  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:06:01.372898  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
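
The -checkend 86400 calls above ask whether each control-plane certificate expires within the next 24 hours. An equivalent check with crypto/x509; the path is one of those in the log.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires within d,
    // the same question `openssl x509 -checkend` answers above.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        fmt.Println(expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour))
    }
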
	I0920 19:06:01.379101  302538 kubeadm.go:392] StartCluster: {Name:no-preload-037711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:no-preload-037711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:06:01.379228  302538 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:06:01.379280  302538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:06:01.416896  302538 cri.go:89] found id: ""
	I0920 19:06:01.416972  302538 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:05:58.606203  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:00.606802  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:59.236283  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:59.736440  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:00.236142  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:00.735772  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:01.236360  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:01.736388  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:02.236462  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:02.736742  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.236457  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.736705  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:01.284596  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:03.784495  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:01.428611  302538 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:06:01.428636  302538 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:06:01.428685  302538 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:06:01.439392  302538 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:06:01.440512  302538 kubeconfig.go:125] found "no-preload-037711" server: "https://192.168.61.136:8443"
	I0920 19:06:01.442938  302538 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:06:01.452938  302538 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.136
	I0920 19:06:01.452982  302538 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:06:01.452999  302538 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:06:01.453062  302538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:06:01.487878  302538 cri.go:89] found id: ""
	I0920 19:06:01.487967  302538 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:06:01.506032  302538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:06:01.516536  302538 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:06:01.516562  302538 kubeadm.go:157] found existing configuration files:
	
	I0920 19:06:01.516609  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:06:01.526718  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:06:01.526790  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:06:01.536809  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:06:01.546172  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:06:01.546243  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:06:01.556211  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:06:01.565796  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:06:01.565869  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:06:01.577089  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:06:01.587862  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:06:01.587985  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
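The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, and is otherwise removed so the following "kubeadm init phase kubeconfig all" can regenerate it. A minimal Go sketch of that check-then-remove pattern (the function name is illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs mirrors the grep/rm sequence in the log: any of the
// four kubeconfigs that is missing or does not reference the expected
// control-plane endpoint is removed so kubeadm can regenerate it.
func cleanStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			_ = os.Remove(f) // ignore errors: the file may simply not exist yet
			fmt.Println("removed stale", f)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}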
	I0920 19:06:01.598666  302538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:06:01.610018  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:01.740046  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.566817  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.784258  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.848752  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.933469  302538 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:06:02.933579  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.434385  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.933975  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.962422  302538 api_server.go:72] duration metric: took 1.028951755s to wait for apiserver process to appear ...
	I0920 19:06:03.962453  302538 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:06:03.962485  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:03.963119  302538 api_server.go:269] stopped: https://192.168.61.136:8443/healthz: Get "https://192.168.61.136:8443/healthz": dial tcp 192.168.61.136:8443: connect: connection refused
	I0920 19:06:04.462843  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.443140  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:06:06.443178  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:06:06.443196  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.485554  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:06:06.485597  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:06:06.485614  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.566023  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:06:06.566068  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:06:06.963116  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.972764  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:06:06.972804  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:06:07.463432  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:07.470963  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:06:07.471000  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:06:07.962553  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:07.967724  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 200:
	ok
	I0920 19:06:07.975215  302538 api_server.go:141] control plane version: v1.31.1
	I0920 19:06:07.975248  302538 api_server.go:131] duration metric: took 4.01278814s to wait for apiserver health ...
	I0920 19:06:07.975258  302538 cni.go:84] Creating CNI manager for ""
	I0920 19:06:07.975267  302538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:06:07.977455  302538 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
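The healthz probes above walk through the apiserver's usual startup sequence: 403 while anonymous access to /healthz is still forbidden, 500 while poststarthooks (rbac/bootstrap-roles, bootstrap-controller, apiservice registration) are pending, and finally 200 "ok". A minimal sketch of that polling loop, assuming a self-signed serving cert and treating 403/500 as "retry" (not minikube's actual api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz until it returns 200 "ok", treating 403
// (RBAC not yet bootstrapped) and 500 (poststarthooks still pending) as
// "keep waiting", matching the sequence of responses logged above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for this sketch: skip cert verification, since the probe
		// runs before any client credentials or CA trust are configured.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying: %.60s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.136:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}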
	I0920 19:06:03.106079  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:05.609475  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:04.236005  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:04.735854  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:05.236716  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:05.736668  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:06.235839  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:06.736412  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:07.236224  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:07.735830  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:08.235800  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:08.736645  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:06.284930  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:08.784854  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:07.979099  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:06:07.991210  302538 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:06:08.016110  302538 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:06:08.031124  302538 system_pods.go:59] 8 kube-system pods found
	I0920 19:06:08.031177  302538 system_pods.go:61] "coredns-7c65d6cfc9-8gmsq" [91d89ad2-f899-464c-b351-a0773c16223b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:06:08.031191  302538 system_pods.go:61] "etcd-no-preload-037711" [5b353ad3-0389-4e3d-b5c3-2f2bc65db200] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 19:06:08.031203  302538 system_pods.go:61] "kube-apiserver-no-preload-037711" [b19002c7-f891-4bc1-a2f0-0f6beebb3987] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 19:06:08.031247  302538 system_pods.go:61] "kube-controller-manager-no-preload-037711" [a5b1951d-7189-4ee3-bc28-bed058048ebb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:06:08.031262  302538 system_pods.go:61] "kube-proxy-zzmkv" [c8f4695b-eefd-407a-9b7c-d5078632d120] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 19:06:08.031270  302538 system_pods.go:61] "kube-scheduler-no-preload-037711" [b44824ba-52ad-4e86-9408-118f0e1852d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 19:06:08.031280  302538 system_pods.go:61] "metrics-server-6867b74b74-7xpgm" [f6280d56-5be4-475f-91da-2862e992868f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:06:08.031290  302538 system_pods.go:61] "storage-provisioner" [d1efb64f-d2a9-4bb4-9bc3-c643c415fcf2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 19:06:08.031300  302538 system_pods.go:74] duration metric: took 15.160935ms to wait for pod list to return data ...
	I0920 19:06:08.031310  302538 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:06:08.035903  302538 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:06:08.035953  302538 node_conditions.go:123] node cpu capacity is 2
	I0920 19:06:08.035968  302538 node_conditions.go:105] duration metric: took 4.652846ms to run NodePressure ...
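The node_conditions lines above read the node's advertised capacity (cpu 2, ephemeral storage 17734596Ki) as part of the NodePressure verification. A small sketch of that lookup using the upstream core/v1 types (helper name is illustrative, not minikube's node_conditions.go):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeCapacity reads the reported CPU and ephemeral-storage capacity from a
// node's status, which is what the node_conditions lines above print.
func nodeCapacity(node *corev1.Node) (cpu, ephemeral string) {
	c := node.Status.Capacity
	return c.Cpu().String(), c.StorageEphemeral().String()
}

func main() {
	node := &corev1.Node{} // an empty Node for illustration only
	cpu, eph := nodeCapacity(node)
	fmt.Println("cpu:", cpu, "ephemeral-storage:", eph)
}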
	I0920 19:06:08.035995  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:08.404721  302538 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 19:06:08.409400  302538 kubeadm.go:739] kubelet initialised
	I0920 19:06:08.409423  302538 kubeadm.go:740] duration metric: took 4.670172ms waiting for restarted kubelet to initialise ...
	I0920 19:06:08.409432  302538 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:06:08.416547  302538 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:10.426817  302538 pod_ready.go:103] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:08.107050  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:10.606744  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:09.236127  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:09.735809  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:10.236585  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:10.735863  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:11.236700  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:11.736557  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:12.236483  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:12.735695  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:13.235905  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:13.736128  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:10.785471  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:13.284642  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:12.923811  302538 pod_ready.go:103] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:15.423162  302538 pod_ready.go:103] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:15.926280  302538 pod_ready.go:93] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:15.926318  302538 pod_ready.go:82] duration metric: took 7.509740963s for pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:15.926332  302538 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:15.932683  302538 pod_ready.go:93] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:15.932713  302538 pod_ready.go:82] duration metric: took 6.372388ms for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:15.932725  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:13.111190  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:15.606371  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:14.236234  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:14.736677  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:15.236499  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:15.735667  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:16.235774  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:16.735833  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:17.236149  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:17.735782  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:18.236400  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:18.736460  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:15.784441  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:18.284748  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:17.938853  302538 pod_ready.go:103] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:19.939569  302538 pod_ready.go:103] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:18.104867  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:20.105870  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:22.605773  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:19.236298  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:19.736672  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:20.236401  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:20.735810  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:21.235673  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:21.736112  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:22.235998  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:22.736179  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:23.236680  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:23.736388  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:20.783320  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:22.783590  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:21.939753  302538 pod_ready.go:93] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:21.939781  302538 pod_ready.go:82] duration metric: took 6.007035191s for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:21.939794  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.446396  302538 pod_ready.go:93] pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:22.446425  302538 pod_ready.go:82] duration metric: took 506.622064ms for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.446435  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zzmkv" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.452105  302538 pod_ready.go:93] pod "kube-proxy-zzmkv" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:22.452130  302538 pod_ready.go:82] duration metric: took 5.688419ms for pod "kube-proxy-zzmkv" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.452139  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.456181  302538 pod_ready.go:93] pod "kube-scheduler-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:22.456205  302538 pod_ready.go:82] duration metric: took 4.05917ms for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.456215  302538 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:24.463262  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
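Each pod_ready.go wait above resolves to the same underlying check: the pod's PodReady condition must report True. A small sketch of that condition lookup using the upstream core/v1 types (the helper name is illustrative, not the test's pod_ready.go):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's PodReady condition is True, which is
// the check behind every "Ready":"True"/"False" line emitted above.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println("ready:", isPodReady(pod))
}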
	I0920 19:06:24.606021  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:27.105497  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:24.236369  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:24.736082  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:25.236694  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:25.736346  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:26.236075  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:26.736666  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:27.236418  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:27.736656  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:28.235972  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:28.735743  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:24.783673  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:26.783960  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.283970  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:26.962413  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.462423  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.606628  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:32.105603  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.236688  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:29.736132  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:30.236404  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:30.735733  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:31.236364  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:31.736031  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:32.236457  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:32.735751  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:33.236371  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:33.736474  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:31.284572  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:33.286630  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:31.464686  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:33.962309  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:35.963445  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:34.105897  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:36.605140  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:34.236387  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:34.236472  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:34.276702  303486 cri.go:89] found id: ""
	I0920 19:06:34.276735  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.276747  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:34.276758  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:34.276815  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:34.312886  303486 cri.go:89] found id: ""
	I0920 19:06:34.312923  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.312935  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:34.312950  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:34.313024  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:34.347199  303486 cri.go:89] found id: ""
	I0920 19:06:34.347240  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.347250  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:34.347258  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:34.347332  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:34.383077  303486 cri.go:89] found id: ""
	I0920 19:06:34.383110  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.383121  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:34.383130  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:34.383202  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:34.421184  303486 cri.go:89] found id: ""
	I0920 19:06:34.421212  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.421222  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:34.421231  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:34.421304  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:34.459964  303486 cri.go:89] found id: ""
	I0920 19:06:34.459998  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.460009  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:34.460018  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:34.460085  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:34.493761  303486 cri.go:89] found id: ""
	I0920 19:06:34.493803  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.493815  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:34.493824  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:34.493894  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:34.534406  303486 cri.go:89] found id: ""
	I0920 19:06:34.534445  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.534457  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
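The loop above probes each control-plane component with "crictl ps -a --quiet --name=<component>"; empty output is what produces the found id: "" / "0 containers" entries, meaning no kube-apiserver, etcd, scheduler, or other component container exists yet on the old-k8s-version node. A minimal sketch of that probe run locally on the node (the log runs it over SSH via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs the same probe as the log above: "crictl ps -a --quiet
// --name=<component>". Empty output means no matching container.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "probe failed:", err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	}
}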
	I0920 19:06:34.534471  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:34.534496  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:34.607256  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:34.607297  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:34.644923  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:34.644953  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:34.693574  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:34.693622  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:34.707703  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:34.707742  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:34.846809  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:37.347895  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:37.377651  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:37.377728  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:37.430034  303486 cri.go:89] found id: ""
	I0920 19:06:37.430071  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.430079  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:37.430087  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:37.430156  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:37.467026  303486 cri.go:89] found id: ""
	I0920 19:06:37.467055  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.467063  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:37.467069  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:37.467120  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:37.505791  303486 cri.go:89] found id: ""
	I0920 19:06:37.505824  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.505835  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:37.505845  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:37.505943  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:37.541519  303486 cri.go:89] found id: ""
	I0920 19:06:37.541556  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.541568  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:37.541577  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:37.541633  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:37.576088  303486 cri.go:89] found id: ""
	I0920 19:06:37.576126  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.576137  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:37.576146  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:37.576204  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:37.613039  303486 cri.go:89] found id: ""
	I0920 19:06:37.613074  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.613084  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:37.613091  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:37.613153  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:37.656440  303486 cri.go:89] found id: ""
	I0920 19:06:37.656473  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.656482  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:37.656489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:37.656555  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:37.693247  303486 cri.go:89] found id: ""
	I0920 19:06:37.693283  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.693292  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:37.693302  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:37.693321  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:37.769230  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:37.769280  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:37.811016  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:37.811058  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:37.865729  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:37.865773  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:37.880056  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:37.880094  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:37.956402  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:35.783789  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:37.787063  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:38.461824  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:40.465028  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:38.605494  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:40.605606  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:40.457303  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:40.473769  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:40.473848  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:40.511320  303486 cri.go:89] found id: ""
	I0920 19:06:40.511354  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.511363  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:40.511371  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:40.511433  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:40.547086  303486 cri.go:89] found id: ""
	I0920 19:06:40.547127  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.547138  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:40.547147  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:40.547216  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:40.580969  303486 cri.go:89] found id: ""
	I0920 19:06:40.581010  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.581022  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:40.581035  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:40.581098  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:40.615802  303486 cri.go:89] found id: ""
	I0920 19:06:40.615842  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.615851  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:40.615858  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:40.615931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:40.649398  303486 cri.go:89] found id: ""
	I0920 19:06:40.649444  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.649459  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:40.649467  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:40.649541  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:40.683124  303486 cri.go:89] found id: ""
	I0920 19:06:40.683160  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.683172  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:40.683181  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:40.683251  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:40.718005  303486 cri.go:89] found id: ""
	I0920 19:06:40.718032  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.718040  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:40.718047  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:40.718107  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:40.751965  303486 cri.go:89] found id: ""
	I0920 19:06:40.751992  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.752000  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:40.752010  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:40.752024  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:40.765195  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:40.765234  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:40.842287  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:40.842321  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:40.842338  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:40.928384  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:40.928430  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:40.970207  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:40.970242  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:43.526435  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:43.540582  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:43.540680  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:43.576798  303486 cri.go:89] found id: ""
	I0920 19:06:43.576837  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.576846  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:43.576852  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:43.576916  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:43.615261  303486 cri.go:89] found id: ""
	I0920 19:06:43.615290  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.615298  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:43.615305  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:43.615359  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:43.651214  303486 cri.go:89] found id: ""
	I0920 19:06:43.651251  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.651264  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:43.651277  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:43.651338  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:43.684483  303486 cri.go:89] found id: ""
	I0920 19:06:43.684523  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.684535  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:43.684544  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:43.684614  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:43.720996  303486 cri.go:89] found id: ""
	I0920 19:06:43.721026  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.721035  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:43.721041  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:43.721107  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:43.764445  303486 cri.go:89] found id: ""
	I0920 19:06:43.764482  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.764493  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:43.764501  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:43.764564  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:43.808848  303486 cri.go:89] found id: ""
	I0920 19:06:43.808878  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.808888  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:43.808897  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:43.808968  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:43.845462  303486 cri.go:89] found id: ""
	I0920 19:06:43.845491  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.845500  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:43.845511  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:43.845525  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:43.896550  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:43.896596  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:43.909243  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:43.909272  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:06:40.284735  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:42.783363  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:42.962289  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:44.963071  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:43.106353  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:45.606296  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	W0920 19:06:43.987455  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:43.987474  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:43.987491  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:44.063585  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:44.063629  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:46.602859  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:46.617286  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:46.617357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:46.653643  303486 cri.go:89] found id: ""
	I0920 19:06:46.653681  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.653693  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:46.653702  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:46.653778  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:46.691169  303486 cri.go:89] found id: ""
	I0920 19:06:46.691198  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.691206  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:46.691213  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:46.691271  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:46.725498  303486 cri.go:89] found id: ""
	I0920 19:06:46.725527  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.725538  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:46.725545  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:46.725614  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:46.758850  303486 cri.go:89] found id: ""
	I0920 19:06:46.758876  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.758884  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:46.758891  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:46.758942  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:46.793648  303486 cri.go:89] found id: ""
	I0920 19:06:46.793683  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.793692  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:46.793699  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:46.793755  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:46.832908  303486 cri.go:89] found id: ""
	I0920 19:06:46.832940  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.832947  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:46.832953  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:46.833019  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:46.866450  303486 cri.go:89] found id: ""
	I0920 19:06:46.866502  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.866513  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:46.866522  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:46.866593  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:46.901966  303486 cri.go:89] found id: ""
	I0920 19:06:46.902001  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.902013  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:46.902026  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:46.902041  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:46.948901  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:46.948946  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:46.963489  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:46.963534  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:47.041701  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:47.041722  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:47.041736  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:47.124320  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:47.124364  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:44.783818  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:46.784000  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:48.785175  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:46.963700  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:49.462018  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:48.104361  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:50.105520  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:52.605799  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:49.664255  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:49.677240  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:49.677322  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:49.712375  303486 cri.go:89] found id: ""
	I0920 19:06:49.712401  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.712409  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:49.712415  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:49.712476  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:49.747682  303486 cri.go:89] found id: ""
	I0920 19:06:49.747713  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.747721  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:49.747727  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:49.747783  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:49.782276  303486 cri.go:89] found id: ""
	I0920 19:06:49.782319  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.782329  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:49.782337  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:49.782400  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:49.822625  303486 cri.go:89] found id: ""
	I0920 19:06:49.822661  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.822672  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:49.822680  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:49.822751  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:49.862159  303486 cri.go:89] found id: ""
	I0920 19:06:49.862192  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.862202  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:49.862212  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:49.862281  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:49.897552  303486 cri.go:89] found id: ""
	I0920 19:06:49.897587  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.897595  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:49.897608  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:49.897667  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:49.931667  303486 cri.go:89] found id: ""
	I0920 19:06:49.931698  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.931709  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:49.931718  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:49.931774  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:49.969206  303486 cri.go:89] found id: ""
	I0920 19:06:49.969236  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.969244  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:49.969254  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:49.969266  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:50.019287  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:50.019328  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:50.033080  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:50.033113  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:50.106415  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:50.106442  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:50.106459  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:50.183710  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:50.183762  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:52.725443  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:52.739293  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:52.739386  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:52.772412  303486 cri.go:89] found id: ""
	I0920 19:06:52.772445  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.772454  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:52.772461  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:52.772528  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:52.811153  303486 cri.go:89] found id: ""
	I0920 19:06:52.811189  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.811197  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:52.811204  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:52.811260  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:52.848709  303486 cri.go:89] found id: ""
	I0920 19:06:52.848740  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.848749  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:52.848755  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:52.848811  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:52.883358  303486 cri.go:89] found id: ""
	I0920 19:06:52.883387  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.883394  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:52.883400  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:52.883455  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:52.917838  303486 cri.go:89] found id: ""
	I0920 19:06:52.917874  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.917893  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:52.917912  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:52.917982  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:52.952340  303486 cri.go:89] found id: ""
	I0920 19:06:52.952378  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.952387  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:52.952396  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:52.952471  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:52.986433  303486 cri.go:89] found id: ""
	I0920 19:06:52.986469  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.986478  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:52.986486  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:52.986582  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:53.024209  303486 cri.go:89] found id: ""
	I0920 19:06:53.024241  303486 logs.go:276] 0 containers: []
	W0920 19:06:53.024249  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:53.024260  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:53.024272  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:53.075336  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:53.075374  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:53.090761  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:53.090802  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:53.167883  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:53.167915  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:53.167933  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:53.242003  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:53.242044  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:50.785624  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:53.284212  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:51.462197  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:53.962545  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:55.962875  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:54.607806  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:57.105146  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:55.779107  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:55.793713  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:55.793802  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:55.829411  303486 cri.go:89] found id: ""
	I0920 19:06:55.829441  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.829450  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:55.829456  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:55.829513  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:55.864578  303486 cri.go:89] found id: ""
	I0920 19:06:55.864606  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.864617  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:55.864625  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:55.864686  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:55.897004  303486 cri.go:89] found id: ""
	I0920 19:06:55.897033  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.897041  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:55.897048  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:55.897106  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:55.931019  303486 cri.go:89] found id: ""
	I0920 19:06:55.931055  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.931066  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:55.931076  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:55.931141  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:55.966595  303486 cri.go:89] found id: ""
	I0920 19:06:55.966625  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.966635  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:55.966643  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:55.966693  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:55.999707  303486 cri.go:89] found id: ""
	I0920 19:06:55.999736  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.999747  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:55.999756  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:55.999825  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:56.034323  303486 cri.go:89] found id: ""
	I0920 19:06:56.034361  303486 logs.go:276] 0 containers: []
	W0920 19:06:56.034371  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:56.034377  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:56.034433  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:56.069019  303486 cri.go:89] found id: ""
	I0920 19:06:56.069048  303486 logs.go:276] 0 containers: []
	W0920 19:06:56.069056  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:56.069066  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:56.069077  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:56.122820  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:56.122860  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:56.136924  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:56.136966  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:56.216255  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:56.216284  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:56.216299  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:56.293461  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:56.293506  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:58.831252  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:58.844410  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:58.844474  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:58.877508  303486 cri.go:89] found id: ""
	I0920 19:06:58.877539  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.877547  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:58.877555  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:58.877613  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:58.911284  303486 cri.go:89] found id: ""
	I0920 19:06:58.911315  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.911323  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:58.911329  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:58.911382  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:58.944646  303486 cri.go:89] found id: ""
	I0920 19:06:58.944675  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.944682  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:58.944688  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:58.944739  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:55.784379  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:58.283450  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:58.461839  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:00.461977  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:59.108066  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:01.605247  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:58.979752  303486 cri.go:89] found id: ""
	I0920 19:06:58.979787  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.979798  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:58.979807  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:58.979864  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:59.016613  303486 cri.go:89] found id: ""
	I0920 19:06:59.016649  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.016661  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:59.016670  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:59.016735  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:59.052012  303486 cri.go:89] found id: ""
	I0920 19:06:59.052039  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.052047  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:59.052054  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:59.052106  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:59.090102  303486 cri.go:89] found id: ""
	I0920 19:06:59.090140  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.090152  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:59.090159  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:59.090213  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:59.128028  303486 cri.go:89] found id: ""
	I0920 19:06:59.128057  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.128068  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:59.128080  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:59.128096  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:59.142966  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:59.143012  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:59.227311  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:59.227336  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:59.227357  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:59.308319  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:59.308366  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:59.347299  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:59.347336  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:01.897644  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:01.912876  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:01.912951  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:01.956550  303486 cri.go:89] found id: ""
	I0920 19:07:01.956679  303486 logs.go:276] 0 containers: []
	W0920 19:07:01.956690  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:01.956700  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:01.956765  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:01.995391  303486 cri.go:89] found id: ""
	I0920 19:07:01.995425  303486 logs.go:276] 0 containers: []
	W0920 19:07:01.995433  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:01.995440  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:01.995501  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:02.031149  303486 cri.go:89] found id: ""
	I0920 19:07:02.031181  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.031193  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:02.031202  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:02.031273  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:02.065856  303486 cri.go:89] found id: ""
	I0920 19:07:02.065885  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.065894  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:02.065924  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:02.065981  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:02.101974  303486 cri.go:89] found id: ""
	I0920 19:07:02.102018  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.102032  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:02.102041  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:02.102115  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:02.138108  303486 cri.go:89] found id: ""
	I0920 19:07:02.138142  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.138151  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:02.138156  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:02.138217  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:02.170136  303486 cri.go:89] found id: ""
	I0920 19:07:02.170165  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.170173  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:02.170179  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:02.170244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:02.203944  303486 cri.go:89] found id: ""
	I0920 19:07:02.203969  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.203978  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:02.203991  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:02.204008  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:02.256635  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:02.256679  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:02.270266  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:02.270303  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:02.341145  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:02.341182  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:02.341199  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:02.415133  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:02.415175  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:00.283726  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:02.285304  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:02.462310  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:04.462872  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:03.605300  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:06.105872  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:04.952448  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:04.966632  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:04.966702  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:05.001098  303486 cri.go:89] found id: ""
	I0920 19:07:05.001131  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.001141  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:05.001149  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:05.001217  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:05.038160  303486 cri.go:89] found id: ""
	I0920 19:07:05.038186  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.038196  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:05.038202  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:05.038260  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:05.083301  303486 cri.go:89] found id: ""
	I0920 19:07:05.083346  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.083357  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:05.083365  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:05.083436  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:05.118916  303486 cri.go:89] found id: ""
	I0920 19:07:05.118952  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.118964  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:05.118972  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:05.119065  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:05.157452  303486 cri.go:89] found id: ""
	I0920 19:07:05.157485  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.157496  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:05.157511  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:05.157587  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:05.197100  303486 cri.go:89] found id: ""
	I0920 19:07:05.197133  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.197143  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:05.197152  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:05.197225  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:05.231286  303486 cri.go:89] found id: ""
	I0920 19:07:05.231317  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.231328  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:05.231336  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:05.231409  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:05.269798  303486 cri.go:89] found id: ""
	I0920 19:07:05.269835  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.269847  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:05.269862  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:05.269882  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:05.310029  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:05.310068  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:05.360493  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:05.360537  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:05.373771  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:05.373815  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:05.449860  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:05.449886  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:05.449924  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:08.034520  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:08.049970  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:08.050040  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:08.084683  303486 cri.go:89] found id: ""
	I0920 19:07:08.084714  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.084724  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:08.084731  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:08.084799  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:08.121150  303486 cri.go:89] found id: ""
	I0920 19:07:08.121176  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.121183  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:08.121190  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:08.121244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:08.157830  303486 cri.go:89] found id: ""
	I0920 19:07:08.157865  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.157877  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:08.157891  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:08.157967  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:08.191040  303486 cri.go:89] found id: ""
	I0920 19:07:08.191082  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.191094  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:08.191102  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:08.191169  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:08.230194  303486 cri.go:89] found id: ""
	I0920 19:07:08.230230  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.230239  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:08.230246  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:08.230304  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:08.268526  303486 cri.go:89] found id: ""
	I0920 19:07:08.268558  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.268566  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:08.268573  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:08.268631  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:08.302383  303486 cri.go:89] found id: ""
	I0920 19:07:08.302411  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.302420  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:08.302428  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:08.302492  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:08.336435  303486 cri.go:89] found id: ""
	I0920 19:07:08.336469  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.336479  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:08.336491  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:08.336505  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:08.418086  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:08.418129  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:08.458355  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:08.458391  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:08.507017  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:08.507062  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:08.522701  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:08.522737  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:08.592777  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:04.784475  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:07.283612  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:09.286218  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:06.963106  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:09.462861  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:08.108458  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:10.605447  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:12.605992  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:11.093689  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:11.107438  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:11.107503  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:11.139701  303486 cri.go:89] found id: ""
	I0920 19:07:11.139742  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.139755  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:11.139765  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:11.139822  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:11.196143  303486 cri.go:89] found id: ""
	I0920 19:07:11.196182  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.196191  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:11.196197  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:11.196268  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:11.232121  303486 cri.go:89] found id: ""
	I0920 19:07:11.232156  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.232164  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:11.232171  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:11.232238  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:11.267307  303486 cri.go:89] found id: ""
	I0920 19:07:11.267338  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.267349  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:11.267358  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:11.267423  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:11.306583  303486 cri.go:89] found id: ""
	I0920 19:07:11.306614  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.306623  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:11.306631  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:11.306698  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:11.348162  303486 cri.go:89] found id: ""
	I0920 19:07:11.348188  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.348196  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:11.348203  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:11.348257  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:11.383612  303486 cri.go:89] found id: ""
	I0920 19:07:11.383649  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.383660  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:11.383669  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:11.383736  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:11.417538  303486 cri.go:89] found id: ""
	I0920 19:07:11.417575  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.417583  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:11.417593  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:11.417609  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:11.470242  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:11.470282  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:11.485448  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:11.485480  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:11.559466  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:11.559495  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:11.559513  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:11.636080  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:11.636133  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:11.783461  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:13.783785  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:11.462940  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:13.963340  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:14.609611  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:17.105222  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:14.177278  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:14.190413  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:14.190483  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:14.224238  303486 cri.go:89] found id: ""
	I0920 19:07:14.224264  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.224272  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:14.224278  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:14.224330  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:14.265253  303486 cri.go:89] found id: ""
	I0920 19:07:14.265285  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.265297  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:14.265304  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:14.265357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:14.300591  303486 cri.go:89] found id: ""
	I0920 19:07:14.300619  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.300633  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:14.300639  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:14.300695  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:14.335638  303486 cri.go:89] found id: ""
	I0920 19:07:14.335669  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.335677  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:14.335683  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:14.335735  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:14.369291  303486 cri.go:89] found id: ""
	I0920 19:07:14.369328  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.369336  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:14.369344  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:14.369397  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:14.404913  303486 cri.go:89] found id: ""
	I0920 19:07:14.404947  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.404958  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:14.404967  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:14.405034  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:14.438793  303486 cri.go:89] found id: ""
	I0920 19:07:14.438834  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.438845  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:14.438856  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:14.438926  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:14.475268  303486 cri.go:89] found id: ""
	I0920 19:07:14.475297  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.475305  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:14.475321  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:14.475342  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:14.528066  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:14.528126  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:14.542850  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:14.542891  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:14.612772  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:14.612800  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:14.612819  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:14.694528  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:14.694579  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:17.234389  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:17.247479  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:17.247544  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:17.285461  303486 cri.go:89] found id: ""
	I0920 19:07:17.285488  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.285496  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:17.285502  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:17.285553  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:17.320580  303486 cri.go:89] found id: ""
	I0920 19:07:17.320606  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.320614  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:17.320620  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:17.320677  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:17.356405  303486 cri.go:89] found id: ""
	I0920 19:07:17.356440  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.356462  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:17.356471  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:17.356526  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:17.391268  303486 cri.go:89] found id: ""
	I0920 19:07:17.391301  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.391309  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:17.391316  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:17.391381  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:17.429886  303486 cri.go:89] found id: ""
	I0920 19:07:17.429938  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.429950  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:17.429959  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:17.430022  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:17.466059  303486 cri.go:89] found id: ""
	I0920 19:07:17.466093  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.466104  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:17.466111  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:17.466176  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:17.501128  303486 cri.go:89] found id: ""
	I0920 19:07:17.501159  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.501168  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:17.501174  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:17.501247  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:17.536969  303486 cri.go:89] found id: ""
	I0920 19:07:17.536999  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.537007  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:17.537016  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:17.537031  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:17.592071  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:17.592119  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:17.609022  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:17.609057  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:17.696393  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:17.696420  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:17.696434  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:17.778077  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:17.778122  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:15.785002  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:18.284101  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:16.463809  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:18.964348  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:19.604758  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:21.608192  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:20.319211  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:20.332158  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:20.332235  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:20.366195  303486 cri.go:89] found id: ""
	I0920 19:07:20.366230  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.366241  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:20.366250  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:20.366313  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:20.401786  303486 cri.go:89] found id: ""
	I0920 19:07:20.401819  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.401829  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:20.401846  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:20.401943  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:20.433684  303486 cri.go:89] found id: ""
	I0920 19:07:20.433711  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.433719  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:20.433725  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:20.433783  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:20.469495  303486 cri.go:89] found id: ""
	I0920 19:07:20.469524  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.469535  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:20.469543  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:20.469613  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:20.502214  303486 cri.go:89] found id: ""
	I0920 19:07:20.502245  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.502256  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:20.502263  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:20.502329  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:20.535829  303486 cri.go:89] found id: ""
	I0920 19:07:20.535867  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.535879  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:20.535887  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:20.535952  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:20.569605  303486 cri.go:89] found id: ""
	I0920 19:07:20.569635  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.569643  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:20.569654  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:20.569726  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:20.603676  303486 cri.go:89] found id: ""
	I0920 19:07:20.603699  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.603706  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:20.603715  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:20.603726  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:20.656645  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:20.656692  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:20.671077  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:20.671107  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:20.740996  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:20.741028  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:20.741046  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:20.820541  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:20.820592  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:23.362973  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:23.380350  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:23.380432  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:23.423145  303486 cri.go:89] found id: ""
	I0920 19:07:23.423183  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.423193  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:23.423202  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:23.423272  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:23.459019  303486 cri.go:89] found id: ""
	I0920 19:07:23.459057  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.459068  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:23.459077  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:23.459144  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:23.502876  303486 cri.go:89] found id: ""
	I0920 19:07:23.502908  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.502920  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:23.502929  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:23.502994  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:23.538440  303486 cri.go:89] found id: ""
	I0920 19:07:23.538471  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.538481  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:23.538489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:23.538552  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:23.575164  303486 cri.go:89] found id: ""
	I0920 19:07:23.575199  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.575211  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:23.575220  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:23.575296  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:23.610449  303486 cri.go:89] found id: ""
	I0920 19:07:23.610480  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.610489  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:23.610495  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:23.610562  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:23.644164  303486 cri.go:89] found id: ""
	I0920 19:07:23.644195  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.644203  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:23.644209  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:23.644275  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:23.684379  303486 cri.go:89] found id: ""
	I0920 19:07:23.684417  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.684428  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:23.684442  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:23.684459  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:23.762838  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:23.762885  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:23.805616  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:23.805650  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:23.857080  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:23.857122  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:23.870602  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:23.870635  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:23.941187  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:20.284264  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:22.284388  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:24.285108  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:21.462493  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:23.467933  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:25.963071  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:24.106087  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:26.605442  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:26.441571  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:26.455091  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:26.455185  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:26.489658  303486 cri.go:89] found id: ""
	I0920 19:07:26.489696  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.489707  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:26.489716  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:26.489773  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:26.528829  303486 cri.go:89] found id: ""
	I0920 19:07:26.528865  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.528878  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:26.528886  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:26.528966  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:26.568402  303486 cri.go:89] found id: ""
	I0920 19:07:26.568429  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.568443  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:26.568450  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:26.568503  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:26.606654  303486 cri.go:89] found id: ""
	I0920 19:07:26.606683  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.606693  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:26.606701  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:26.606764  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:26.640825  303486 cri.go:89] found id: ""
	I0920 19:07:26.640856  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.640864  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:26.640871  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:26.640934  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:26.677023  303486 cri.go:89] found id: ""
	I0920 19:07:26.677054  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.677062  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:26.677068  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:26.677123  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:26.712921  303486 cri.go:89] found id: ""
	I0920 19:07:26.712956  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.712964  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:26.712971  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:26.713031  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:26.747750  303486 cri.go:89] found id: ""
	I0920 19:07:26.747778  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.747786  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:26.747796  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:26.747810  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:26.799240  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:26.799283  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:26.813197  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:26.813233  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:26.882751  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:26.882780  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:26.882799  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:26.965108  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:26.965146  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:26.784306  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:29.283573  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:28.461526  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:30.462242  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:28.606602  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:31.106657  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:29.503960  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:29.516601  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:29.516669  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:29.555581  303486 cri.go:89] found id: ""
	I0920 19:07:29.555622  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.555632  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:29.555640  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:29.555711  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:29.593858  303486 cri.go:89] found id: ""
	I0920 19:07:29.593885  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.593923  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:29.593937  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:29.593990  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:29.629507  303486 cri.go:89] found id: ""
	I0920 19:07:29.629538  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.629548  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:29.629557  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:29.629616  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:29.662880  303486 cri.go:89] found id: ""
	I0920 19:07:29.662913  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.662921  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:29.662928  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:29.662976  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:29.695422  303486 cri.go:89] found id: ""
	I0920 19:07:29.695448  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.695458  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:29.695466  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:29.695531  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:29.730641  303486 cri.go:89] found id: ""
	I0920 19:07:29.730673  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.730685  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:29.730693  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:29.730756  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:29.764186  303486 cri.go:89] found id: ""
	I0920 19:07:29.764220  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.764229  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:29.764238  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:29.764302  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:29.804146  303486 cri.go:89] found id: ""
	I0920 19:07:29.804174  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.804182  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:29.804191  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:29.804204  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:29.885573  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:29.885633  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:29.924619  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:29.924667  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:29.978187  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:29.978230  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:29.992161  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:29.992190  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:30.069767  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:32.570197  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:32.583160  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:32.583244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:32.620842  303486 cri.go:89] found id: ""
	I0920 19:07:32.620870  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.620881  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:32.620899  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:32.620958  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:32.657169  303486 cri.go:89] found id: ""
	I0920 19:07:32.657205  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.657216  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:32.657225  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:32.657292  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:32.694773  303486 cri.go:89] found id: ""
	I0920 19:07:32.694802  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.694809  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:32.694815  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:32.694882  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:32.733318  303486 cri.go:89] found id: ""
	I0920 19:07:32.733350  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.733360  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:32.733370  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:32.733436  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:32.766019  303486 cri.go:89] found id: ""
	I0920 19:07:32.766052  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.766062  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:32.766070  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:32.766138  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:32.801412  303486 cri.go:89] found id: ""
	I0920 19:07:32.801443  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.801454  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:32.801463  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:32.801533  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:32.833743  303486 cri.go:89] found id: ""
	I0920 19:07:32.833771  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.833779  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:32.833787  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:32.833847  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:32.866775  303486 cri.go:89] found id: ""
	I0920 19:07:32.866803  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.866811  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:32.866821  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:32.866839  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:32.919257  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:32.919310  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:32.933554  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:32.933602  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:33.002657  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:33.002702  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:33.002721  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:33.081271  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:33.081316  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:31.284488  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:33.782998  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:32.462645  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:34.963285  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:33.609072  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:36.107460  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:35.627131  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:35.640958  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:35.641032  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:35.675943  303486 cri.go:89] found id: ""
	I0920 19:07:35.675976  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.675984  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:35.675991  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:35.676044  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:35.710075  303486 cri.go:89] found id: ""
	I0920 19:07:35.710104  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.710116  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:35.710124  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:35.710194  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:35.747890  303486 cri.go:89] found id: ""
	I0920 19:07:35.747920  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.747931  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:35.747939  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:35.748004  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:35.786197  303486 cri.go:89] found id: ""
	I0920 19:07:35.786231  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.786242  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:35.786252  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:35.786314  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:35.819109  303486 cri.go:89] found id: ""
	I0920 19:07:35.819146  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.819158  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:35.819168  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:35.819244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:35.853244  303486 cri.go:89] found id: ""
	I0920 19:07:35.853282  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.853292  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:35.853301  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:35.853378  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:35.886864  303486 cri.go:89] found id: ""
	I0920 19:07:35.886897  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.886908  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:35.886917  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:35.886986  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:35.920872  303486 cri.go:89] found id: ""
	I0920 19:07:35.920906  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.920917  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:35.920939  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:35.920957  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:35.998741  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:35.998794  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:36.040681  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:36.040720  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:36.095848  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:36.095909  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:36.110903  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:36.110939  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:36.186658  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:38.687762  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:38.701640  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:38.701708  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:38.734908  303486 cri.go:89] found id: ""
	I0920 19:07:38.734946  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.734956  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:38.734966  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:38.735031  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:38.768062  303486 cri.go:89] found id: ""
	I0920 19:07:38.768100  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.768112  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:38.768120  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:38.768188  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:38.800881  303486 cri.go:89] found id: ""
	I0920 19:07:38.800915  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.800927  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:38.800936  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:38.801004  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:38.835119  303486 cri.go:89] found id: ""
	I0920 19:07:38.835148  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.835156  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:38.835164  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:38.835223  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:38.872677  303486 cri.go:89] found id: ""
	I0920 19:07:38.872712  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.872723  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:38.872733  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:38.872807  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:38.913921  303486 cri.go:89] found id: ""
	I0920 19:07:38.913955  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.913965  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:38.913972  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:38.914029  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:35.783443  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.284549  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:36.963668  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.963893  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.608347  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:41.106313  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.951849  303486 cri.go:89] found id: ""
	I0920 19:07:38.951882  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.951893  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:38.951902  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:38.951972  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:38.988117  303486 cri.go:89] found id: ""
	I0920 19:07:38.988149  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.988161  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:38.988177  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:38.988191  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:39.028804  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:39.028843  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:39.083374  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:39.083427  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:39.097434  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:39.097463  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:39.172185  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:39.172213  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:39.172226  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:41.756648  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:41.772358  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:41.772432  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:41.809067  303486 cri.go:89] found id: ""
	I0920 19:07:41.809109  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.809123  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:41.809132  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:41.809191  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:41.853413  303486 cri.go:89] found id: ""
	I0920 19:07:41.853445  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.853457  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:41.853465  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:41.853524  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:41.891536  303486 cri.go:89] found id: ""
	I0920 19:07:41.891569  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.891580  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:41.891588  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:41.891668  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:41.931046  303486 cri.go:89] found id: ""
	I0920 19:07:41.931085  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.931093  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:41.931099  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:41.931155  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:41.968120  303486 cri.go:89] found id: ""
	I0920 19:07:41.968152  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.968164  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:41.968172  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:41.968240  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:42.002478  303486 cri.go:89] found id: ""
	I0920 19:07:42.002512  303486 logs.go:276] 0 containers: []
	W0920 19:07:42.002523  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:42.002532  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:42.002599  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:42.038031  303486 cri.go:89] found id: ""
	I0920 19:07:42.038067  303486 logs.go:276] 0 containers: []
	W0920 19:07:42.038080  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:42.038087  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:42.038150  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:42.072124  303486 cri.go:89] found id: ""
	I0920 19:07:42.072155  303486 logs.go:276] 0 containers: []
	W0920 19:07:42.072166  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:42.072178  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:42.072195  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:42.128217  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:42.128259  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:42.142291  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:42.142322  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:42.215278  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:42.215305  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:42.215324  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:42.293431  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:42.293476  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:40.784191  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:43.283580  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:41.463429  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:43.963059  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:43.608790  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:46.105338  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:44.836094  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:44.850327  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:44.850397  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:44.884595  303486 cri.go:89] found id: ""
	I0920 19:07:44.884624  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.884632  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:44.884639  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:44.884711  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:44.917727  303486 cri.go:89] found id: ""
	I0920 19:07:44.917754  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.917763  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:44.917769  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:44.917837  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:44.955821  303486 cri.go:89] found id: ""
	I0920 19:07:44.955860  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.955871  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:44.955879  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:44.955937  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:44.994543  303486 cri.go:89] found id: ""
	I0920 19:07:44.994579  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.994590  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:44.994598  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:44.994651  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:45.031839  303486 cri.go:89] found id: ""
	I0920 19:07:45.031877  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.031888  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:45.031896  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:45.031962  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:45.070554  303486 cri.go:89] found id: ""
	I0920 19:07:45.070588  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.070601  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:45.070609  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:45.070678  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:45.108727  303486 cri.go:89] found id: ""
	I0920 19:07:45.108760  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.108771  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:45.108779  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:45.108855  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:45.144045  303486 cri.go:89] found id: ""
	I0920 19:07:45.144075  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.144083  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:45.144094  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:45.144108  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:45.185800  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:45.185834  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:45.238364  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:45.238410  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:45.252111  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:45.252145  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:45.329009  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:45.329036  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:45.329051  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:47.912910  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:47.926378  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:47.926458  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:47.961067  303486 cri.go:89] found id: ""
	I0920 19:07:47.961094  303486 logs.go:276] 0 containers: []
	W0920 19:07:47.961103  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:47.961111  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:47.961172  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:48.006680  303486 cri.go:89] found id: ""
	I0920 19:07:48.006717  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.006729  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:48.006738  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:48.006805  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:48.042230  303486 cri.go:89] found id: ""
	I0920 19:07:48.042261  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.042272  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:48.042281  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:48.042349  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:48.080779  303486 cri.go:89] found id: ""
	I0920 19:07:48.080836  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.080850  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:48.080860  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:48.080931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:48.119439  303486 cri.go:89] found id: ""
	I0920 19:07:48.119469  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.119477  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:48.119483  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:48.119536  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:48.156219  303486 cri.go:89] found id: ""
	I0920 19:07:48.156258  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.156269  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:48.156279  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:48.156354  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:48.192112  303486 cri.go:89] found id: ""
	I0920 19:07:48.192151  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.192162  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:48.192170  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:48.192240  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:48.228916  303486 cri.go:89] found id: ""
	I0920 19:07:48.228958  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.228968  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:48.228981  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:48.229003  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:48.284073  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:48.284115  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:48.297677  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:48.297713  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:48.374834  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:48.374860  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:48.374876  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:48.455468  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:48.455512  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:45.284055  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:47.783744  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:46.461832  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:48.462980  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:50.463485  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:48.605035  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:51.105952  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:50.998354  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:51.012827  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:51.012904  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:51.046701  303486 cri.go:89] found id: ""
	I0920 19:07:51.046739  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.046750  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:51.046758  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:51.046827  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:51.083829  303486 cri.go:89] found id: ""
	I0920 19:07:51.083867  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.083878  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:51.083891  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:51.083965  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:51.124126  303486 cri.go:89] found id: ""
	I0920 19:07:51.124170  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.124180  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:51.124187  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:51.124254  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:51.159141  303486 cri.go:89] found id: ""
	I0920 19:07:51.159175  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.159184  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:51.159190  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:51.159253  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:51.192793  303486 cri.go:89] found id: ""
	I0920 19:07:51.192829  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.192840  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:51.192863  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:51.192938  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:51.225489  303486 cri.go:89] found id: ""
	I0920 19:07:51.225515  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.225524  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:51.225530  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:51.225582  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:51.258256  303486 cri.go:89] found id: ""
	I0920 19:07:51.258283  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.258294  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:51.258301  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:51.258363  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:51.292474  303486 cri.go:89] found id: ""
	I0920 19:07:51.292504  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.292512  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:51.292522  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:51.292537  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:51.331386  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:51.331422  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:51.385136  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:51.385182  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:51.400792  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:51.400828  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:51.492771  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:51.492795  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:51.492810  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:49.784132  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:52.284075  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:54.284870  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:52.963813  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:55.464095  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:53.607259  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:56.106592  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:54.074889  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:54.088453  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:54.088534  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:54.125096  303486 cri.go:89] found id: ""
	I0920 19:07:54.125138  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.125159  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:54.125166  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:54.125231  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:54.159630  303486 cri.go:89] found id: ""
	I0920 19:07:54.159665  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.159676  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:54.159685  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:54.159759  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:54.195919  303486 cri.go:89] found id: ""
	I0920 19:07:54.195951  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.195965  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:54.195972  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:54.196042  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:54.230294  303486 cri.go:89] found id: ""
	I0920 19:07:54.230323  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.230332  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:54.230339  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:54.230396  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:54.266764  303486 cri.go:89] found id: ""
	I0920 19:07:54.266793  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.266800  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:54.266807  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:54.266865  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:54.300704  303486 cri.go:89] found id: ""
	I0920 19:07:54.300731  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.300741  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:54.300750  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:54.300817  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:54.334447  303486 cri.go:89] found id: ""
	I0920 19:07:54.334473  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.334480  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:54.334487  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:54.334546  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:54.369814  303486 cri.go:89] found id: ""
	I0920 19:07:54.369858  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.369866  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:54.369878  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:54.369890  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:54.423088  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:54.423135  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:54.436770  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:54.436801  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:54.510731  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:54.510757  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:54.510773  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:54.593041  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:54.593091  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:57.134030  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:57.147605  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:57.147674  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:57.202662  303486 cri.go:89] found id: ""
	I0920 19:07:57.202690  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.202699  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:57.202705  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:57.202757  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:57.236448  303486 cri.go:89] found id: ""
	I0920 19:07:57.236476  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.236484  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:57.236493  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:57.236558  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:57.269450  303486 cri.go:89] found id: ""
	I0920 19:07:57.269478  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.269485  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:57.269491  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:57.269544  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:57.305749  303486 cri.go:89] found id: ""
	I0920 19:07:57.305784  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.305795  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:57.305806  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:57.305877  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:57.339802  303486 cri.go:89] found id: ""
	I0920 19:07:57.339844  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.339857  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:57.339866  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:57.339942  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:57.371929  303486 cri.go:89] found id: ""
	I0920 19:07:57.371962  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.371971  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:57.371980  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:57.372051  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:57.405749  303486 cri.go:89] found id: ""
	I0920 19:07:57.405789  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.405802  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:57.405812  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:57.405888  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:57.439259  303486 cri.go:89] found id: ""
	I0920 19:07:57.439291  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.439300  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:57.439310  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:57.439323  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:57.491405  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:57.491450  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:57.505992  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:57.506027  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:57.580598  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:57.580623  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:57.580638  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:57.659475  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:57.659513  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:56.783867  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:58.783944  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:57.465789  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:59.963589  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:58.606492  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:01.105967  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:00.201478  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:00.217162  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:00.217228  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:00.252219  303486 cri.go:89] found id: ""
	I0920 19:08:00.252247  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.252256  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:00.252263  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:00.252334  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:00.287244  303486 cri.go:89] found id: ""
	I0920 19:08:00.287283  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.287295  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:00.287302  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:00.287367  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:00.325785  303486 cri.go:89] found id: ""
	I0920 19:08:00.325818  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.325829  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:00.325839  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:00.325931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:00.359718  303486 cri.go:89] found id: ""
	I0920 19:08:00.359747  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.359757  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:00.359766  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:00.359847  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:00.399105  303486 cri.go:89] found id: ""
	I0920 19:08:00.399147  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.399156  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:00.399163  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:00.399227  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:00.433647  303486 cri.go:89] found id: ""
	I0920 19:08:00.433675  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.433683  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:00.433692  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:00.433756  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:00.467771  303486 cri.go:89] found id: ""
	I0920 19:08:00.467820  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.467832  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:00.467841  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:00.467911  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:00.511320  303486 cri.go:89] found id: ""
	I0920 19:08:00.511363  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.511376  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:00.511392  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:00.511414  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:00.594669  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:00.594703  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:00.594723  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:00.672747  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:00.672800  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:00.710001  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:00.710049  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:00.760333  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:00.760378  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:03.274393  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:03.289260  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:03.289352  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:03.327884  303486 cri.go:89] found id: ""
	I0920 19:08:03.327919  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.327932  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:03.327942  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:03.328015  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:03.367259  303486 cri.go:89] found id: ""
	I0920 19:08:03.367289  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.367297  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:03.367303  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:03.367361  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:03.405843  303486 cri.go:89] found id: ""
	I0920 19:08:03.405899  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.405932  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:03.405942  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:03.406056  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:03.441026  303486 cri.go:89] found id: ""
	I0920 19:08:03.441058  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.441069  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:03.441078  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:03.441147  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:03.477213  303486 cri.go:89] found id: ""
	I0920 19:08:03.477249  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.477261  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:03.477327  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:03.477415  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:03.515843  303486 cri.go:89] found id: ""
	I0920 19:08:03.515880  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.515888  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:03.515895  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:03.515945  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:03.566972  303486 cri.go:89] found id: ""
	I0920 19:08:03.567009  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.567020  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:03.567028  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:03.567097  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:03.616957  303486 cri.go:89] found id: ""
	I0920 19:08:03.617000  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.617013  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:03.617029  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:03.617048  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:03.683140  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:03.683192  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:03.697225  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:03.697267  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:03.770430  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:03.770455  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:03.770478  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:03.848796  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:03.848836  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:01.284245  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:03.284437  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:01.964058  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:04.462786  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:03.607506  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:06.106008  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:06.387706  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:06.401600  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:06.401669  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:06.437854  303486 cri.go:89] found id: ""
	I0920 19:08:06.437890  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.437917  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:06.437926  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:06.437993  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:06.472617  303486 cri.go:89] found id: ""
	I0920 19:08:06.472647  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.472655  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:06.472662  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:06.472718  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:06.510083  303486 cri.go:89] found id: ""
	I0920 19:08:06.510118  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.510131  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:06.510140  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:06.510212  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:06.546388  303486 cri.go:89] found id: ""
	I0920 19:08:06.546418  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.546427  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:06.546434  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:06.546485  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:06.584043  303486 cri.go:89] found id: ""
	I0920 19:08:06.584084  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.584096  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:06.584106  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:06.584182  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:06.622118  303486 cri.go:89] found id: ""
	I0920 19:08:06.622147  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.622155  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:06.622161  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:06.622217  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:06.655513  303486 cri.go:89] found id: ""
	I0920 19:08:06.655552  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.655585  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:06.655593  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:06.655657  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:06.690286  303486 cri.go:89] found id: ""
	I0920 19:08:06.690324  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.690336  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:06.690350  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:06.690368  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:06.729229  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:06.729259  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:06.780368  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:06.780411  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:06.794746  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:06.794782  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:06.866918  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:06.866944  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:06.866967  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:05.784123  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:08.284383  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:06.462855  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:08.466867  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:10.963736  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:08.106490  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:10.606291  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:09.451583  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:09.465111  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:09.465178  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:09.497679  303486 cri.go:89] found id: ""
	I0920 19:08:09.497713  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.497725  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:09.497733  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:09.497797  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:09.535297  303486 cri.go:89] found id: ""
	I0920 19:08:09.535334  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.535345  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:09.535353  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:09.535427  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:09.572449  303486 cri.go:89] found id: ""
	I0920 19:08:09.572482  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.572491  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:09.572498  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:09.572608  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:09.612672  303486 cri.go:89] found id: ""
	I0920 19:08:09.612697  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.612705  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:09.612711  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:09.612797  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:09.654366  303486 cri.go:89] found id: ""
	I0920 19:08:09.654399  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.654408  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:09.654415  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:09.654470  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:09.694825  303486 cri.go:89] found id: ""
	I0920 19:08:09.694858  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.694870  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:09.694878  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:09.694942  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:09.731618  303486 cri.go:89] found id: ""
	I0920 19:08:09.731682  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.731693  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:09.731702  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:09.731775  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:09.766717  303486 cri.go:89] found id: ""
	I0920 19:08:09.766755  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.766765  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:09.766779  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:09.766794  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:09.823505  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:09.823549  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:09.837622  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:09.837658  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:09.919105  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:09.919139  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:09.919156  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:10.000899  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:10.000943  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:12.542974  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:12.557265  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:12.557335  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:12.594099  303486 cri.go:89] found id: ""
	I0920 19:08:12.594126  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.594134  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:12.594140  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:12.594199  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:12.627271  303486 cri.go:89] found id: ""
	I0920 19:08:12.627301  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.627308  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:12.627314  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:12.627366  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:12.661225  303486 cri.go:89] found id: ""
	I0920 19:08:12.661256  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.661265  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:12.661272  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:12.661332  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:12.701381  303486 cri.go:89] found id: ""
	I0920 19:08:12.701424  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.701437  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:12.701447  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:12.701524  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:12.739189  303486 cri.go:89] found id: ""
	I0920 19:08:12.739227  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.739235  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:12.739246  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:12.739299  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:12.780931  303486 cri.go:89] found id: ""
	I0920 19:08:12.780958  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.781055  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:12.781068  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:12.781124  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:12.818097  303486 cri.go:89] found id: ""
	I0920 19:08:12.818137  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.818150  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:12.818161  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:12.818294  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:12.852925  303486 cri.go:89] found id: ""
	I0920 19:08:12.852957  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.852965  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:12.852975  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:12.852990  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:12.924746  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:12.924774  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:12.924791  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:13.005668  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:13.005718  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:13.044327  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:13.044359  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:13.094788  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:13.094833  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:10.284510  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:12.783546  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:12.964694  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:15.463615  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:13.105052  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:15.604922  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:15.611965  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:15.625857  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:15.625960  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:15.662138  303486 cri.go:89] found id: ""
	I0920 19:08:15.662169  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.662177  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:15.662184  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:15.662261  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:15.696000  303486 cri.go:89] found id: ""
	I0920 19:08:15.696067  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.696100  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:15.696115  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:15.696234  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:15.735594  303486 cri.go:89] found id: ""
	I0920 19:08:15.735625  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.735633  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:15.735640  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:15.735699  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:15.774666  303486 cri.go:89] found id: ""
	I0920 19:08:15.774693  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.774703  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:15.774712  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:15.774777  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:15.810754  303486 cri.go:89] found id: ""
	I0920 19:08:15.810799  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.810811  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:15.810820  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:15.810884  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:15.846709  303486 cri.go:89] found id: ""
	I0920 19:08:15.846739  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.846748  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:15.846757  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:15.846819  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:15.880798  303486 cri.go:89] found id: ""
	I0920 19:08:15.880825  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.880833  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:15.880839  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:15.880895  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:15.915119  303486 cri.go:89] found id: ""
	I0920 19:08:15.915150  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.915159  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:15.915170  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:15.915186  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:15.966048  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:15.966087  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:15.979287  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:15.979322  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:16.052129  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:16.052163  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:16.052180  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:16.137743  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:16.137788  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:18.678389  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:18.693073  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:18.693152  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:18.734909  303486 cri.go:89] found id: ""
	I0920 19:08:18.734943  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.734954  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:18.734962  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:18.735028  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:18.773472  303486 cri.go:89] found id: ""
	I0920 19:08:18.773506  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.773517  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:18.773525  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:18.773620  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:18.812184  303486 cri.go:89] found id: ""
	I0920 19:08:18.812218  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.812228  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:18.812236  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:18.812305  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:18.846569  303486 cri.go:89] found id: ""
	I0920 19:08:18.846608  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.846619  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:18.846627  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:18.846700  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:18.881794  303486 cri.go:89] found id: ""
	I0920 19:08:18.881836  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.881862  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:18.881870  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:18.881943  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:18.919657  303486 cri.go:89] found id: ""
	I0920 19:08:18.919688  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.919698  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:18.919708  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:18.919774  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:14.784734  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:17.283590  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:19.284056  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:17.962913  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:20.462190  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:18.105736  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:20.106314  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:22.605231  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:18.955117  303486 cri.go:89] found id: ""
	I0920 19:08:18.955146  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.955157  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:18.955166  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:18.955243  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:18.992389  303486 cri.go:89] found id: ""
	I0920 19:08:18.992422  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.992430  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:18.992444  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:18.992460  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:19.070374  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:19.070417  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:19.110793  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:19.110825  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:19.163783  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:19.163830  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:19.177348  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:19.177387  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:19.249469  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:21.749644  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:21.764920  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:21.765006  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:21.803443  303486 cri.go:89] found id: ""
	I0920 19:08:21.803473  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.803481  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:21.803489  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:21.803545  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:21.844552  303486 cri.go:89] found id: ""
	I0920 19:08:21.844582  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.844593  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:21.844601  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:21.844672  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:21.878979  303486 cri.go:89] found id: ""
	I0920 19:08:21.879007  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.879017  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:21.879029  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:21.879099  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:21.915745  303486 cri.go:89] found id: ""
	I0920 19:08:21.915773  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.915783  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:21.915794  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:21.915865  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:21.948999  303486 cri.go:89] found id: ""
	I0920 19:08:21.949031  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.949043  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:21.949052  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:21.949118  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:21.984238  303486 cri.go:89] found id: ""
	I0920 19:08:21.984269  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.984277  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:21.984284  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:21.984357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:22.018581  303486 cri.go:89] found id: ""
	I0920 19:08:22.018610  303486 logs.go:276] 0 containers: []
	W0920 19:08:22.018620  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:22.018628  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:22.018694  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:22.051868  303486 cri.go:89] found id: ""
	I0920 19:08:22.051903  303486 logs.go:276] 0 containers: []
	W0920 19:08:22.051913  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:22.051925  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:22.051942  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:22.106711  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:22.106756  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:22.120910  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:22.120940  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:22.196564  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:22.196591  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:22.196608  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:22.275235  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:22.275288  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:21.785129  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:24.284359  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:22.463122  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:24.962694  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:25.105050  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:27.105237  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:24.821956  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:24.836846  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:24.836918  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:24.878371  303486 cri.go:89] found id: ""
	I0920 19:08:24.878398  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.878406  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:24.878413  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:24.878464  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:24.911450  303486 cri.go:89] found id: ""
	I0920 19:08:24.911480  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.911489  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:24.911497  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:24.911590  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:24.949248  303486 cri.go:89] found id: ""
	I0920 19:08:24.949281  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.949289  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:24.949298  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:24.949352  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:24.987899  303486 cri.go:89] found id: ""
	I0920 19:08:24.987932  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.987939  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:24.987948  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:24.988011  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:25.020589  303486 cri.go:89] found id: ""
	I0920 19:08:25.020627  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.020638  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:25.020646  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:25.020701  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:25.060223  303486 cri.go:89] found id: ""
	I0920 19:08:25.060250  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.060258  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:25.060266  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:25.060331  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:25.099111  303486 cri.go:89] found id: ""
	I0920 19:08:25.099141  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.099151  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:25.099160  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:25.099242  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:25.136055  303486 cri.go:89] found id: ""
	I0920 19:08:25.136089  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.136098  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:25.136118  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:25.136135  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:25.187619  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:25.187658  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:25.200983  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:25.201016  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:25.270746  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:25.270778  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:25.270795  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:25.350009  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:25.350050  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:27.889864  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:27.903156  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:27.903231  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:27.935087  303486 cri.go:89] found id: ""
	I0920 19:08:27.935118  303486 logs.go:276] 0 containers: []
	W0920 19:08:27.935128  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:27.935138  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:27.935199  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:27.970451  303486 cri.go:89] found id: ""
	I0920 19:08:27.970479  303486 logs.go:276] 0 containers: []
	W0920 19:08:27.970487  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:27.970494  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:27.970545  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:28.004931  303486 cri.go:89] found id: ""
	I0920 19:08:28.004980  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.004992  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:28.005002  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:28.005068  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:28.039438  303486 cri.go:89] found id: ""
	I0920 19:08:28.039470  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.039478  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:28.039485  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:28.039535  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:28.076023  303486 cri.go:89] found id: ""
	I0920 19:08:28.076050  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.076058  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:28.076064  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:28.076131  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:28.114726  303486 cri.go:89] found id: ""
	I0920 19:08:28.114761  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.114772  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:28.114781  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:28.114846  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:28.150790  303486 cri.go:89] found id: ""
	I0920 19:08:28.150822  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.150832  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:28.150841  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:28.150908  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:28.186576  303486 cri.go:89] found id: ""
	I0920 19:08:28.186606  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.186614  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:28.186626  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:28.186648  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:28.240939  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:28.240984  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:28.255267  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:28.255304  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:28.327773  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:28.327797  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:28.327809  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:28.418011  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:28.418055  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:26.785099  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:29.284297  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:26.962825  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:28.963261  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:30.963575  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:29.605453  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:32.104848  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:30.962398  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:30.975385  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:30.975471  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:31.009898  303486 cri.go:89] found id: ""
	I0920 19:08:31.009952  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.009964  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:31.009973  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:31.010044  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:31.043639  303486 cri.go:89] found id: ""
	I0920 19:08:31.043670  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.043679  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:31.043689  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:31.043758  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:31.077709  303486 cri.go:89] found id: ""
	I0920 19:08:31.077745  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.077753  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:31.077759  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:31.077818  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:31.111117  303486 cri.go:89] found id: ""
	I0920 19:08:31.111150  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.111160  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:31.111168  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:31.111234  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:31.143888  303486 cri.go:89] found id: ""
	I0920 19:08:31.143921  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.143933  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:31.143942  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:31.144014  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:31.176694  303486 cri.go:89] found id: ""
	I0920 19:08:31.176729  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.176742  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:31.176751  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:31.176815  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:31.213794  303486 cri.go:89] found id: ""
	I0920 19:08:31.213832  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.213844  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:31.213854  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:31.213946  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:31.250160  303486 cri.go:89] found id: ""
	I0920 19:08:31.250219  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.250230  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:31.250244  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:31.250261  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:31.263748  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:31.263784  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:31.337719  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:31.337749  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:31.337762  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:31.420398  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:31.420446  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:31.459992  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:31.460030  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:31.284818  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:33.783288  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:33.462900  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:35.463122  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:34.105758  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:36.604917  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:34.014229  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:34.028129  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:34.028194  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:34.060793  303486 cri.go:89] found id: ""
	I0920 19:08:34.060832  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.060850  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:34.060859  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:34.060919  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:34.094440  303486 cri.go:89] found id: ""
	I0920 19:08:34.094467  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.094475  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:34.094481  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:34.094544  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:34.128824  303486 cri.go:89] found id: ""
	I0920 19:08:34.128861  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.128872  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:34.128881  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:34.128948  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:34.160861  303486 cri.go:89] found id: ""
	I0920 19:08:34.160894  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.160903  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:34.160911  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:34.160967  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:34.196897  303486 cri.go:89] found id: ""
	I0920 19:08:34.196933  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.196952  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:34.196958  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:34.197020  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:34.229083  303486 cri.go:89] found id: ""
	I0920 19:08:34.229115  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.229125  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:34.229134  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:34.229205  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:34.261877  303486 cri.go:89] found id: ""
	I0920 19:08:34.261922  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.261933  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:34.261941  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:34.262008  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:34.296145  303486 cri.go:89] found id: ""
	I0920 19:08:34.296177  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.296189  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:34.296199  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:34.296214  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:34.361598  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:34.361624  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:34.361641  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:34.441067  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:34.441110  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:34.483333  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:34.483362  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:34.538345  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:34.538388  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:37.053155  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:37.067157  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:37.067230  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:37.101432  303486 cri.go:89] found id: ""
	I0920 19:08:37.101466  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.101476  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:37.101485  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:37.101550  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:37.134375  303486 cri.go:89] found id: ""
	I0920 19:08:37.134408  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.134416  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:37.134423  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:37.134487  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:37.167049  303486 cri.go:89] found id: ""
	I0920 19:08:37.167087  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.167099  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:37.167107  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:37.167175  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:37.209358  303486 cri.go:89] found id: ""
	I0920 19:08:37.209387  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.209397  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:37.209405  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:37.209470  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:37.243227  303486 cri.go:89] found id: ""
	I0920 19:08:37.243261  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.243272  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:37.243281  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:37.243332  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:37.276546  303486 cri.go:89] found id: ""
	I0920 19:08:37.276596  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.276607  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:37.276626  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:37.276688  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:37.311233  303486 cri.go:89] found id: ""
	I0920 19:08:37.311268  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.311279  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:37.311287  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:37.311352  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:37.349970  303486 cri.go:89] found id: ""
	I0920 19:08:37.350003  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.350013  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:37.350025  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:37.350041  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:37.399405  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:37.399445  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:37.423764  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:37.423800  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:37.498797  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:37.498826  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:37.498841  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:37.575521  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:37.575566  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:35.783897  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:37.784496  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:37.463224  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:39.463445  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:38.605444  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:40.606712  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:40.118650  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:40.131967  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:40.132051  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:40.165313  303486 cri.go:89] found id: ""
	I0920 19:08:40.165349  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.165358  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:40.165366  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:40.165439  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:40.197194  303486 cri.go:89] found id: ""
	I0920 19:08:40.197223  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.197232  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:40.197238  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:40.197289  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:40.236769  303486 cri.go:89] found id: ""
	I0920 19:08:40.236800  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.236810  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:40.236819  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:40.236888  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:40.271960  303486 cri.go:89] found id: ""
	I0920 19:08:40.271984  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.271992  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:40.271998  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:40.272049  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:40.307874  303486 cri.go:89] found id: ""
	I0920 19:08:40.307909  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.307917  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:40.307923  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:40.307982  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:40.342128  303486 cri.go:89] found id: ""
	I0920 19:08:40.342160  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.342168  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:40.342175  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:40.342233  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:40.381493  303486 cri.go:89] found id: ""
	I0920 19:08:40.381529  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.381542  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:40.381551  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:40.381617  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:40.415164  303486 cri.go:89] found id: ""
	I0920 19:08:40.415199  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.415211  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:40.415222  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:40.415238  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:40.488306  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:40.488330  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:40.488350  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:40.567193  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:40.567235  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:40.607256  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:40.607287  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:40.659504  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:40.659542  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:43.174043  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:43.188690  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:43.188790  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:43.227223  303486 cri.go:89] found id: ""
	I0920 19:08:43.227251  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.227259  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:43.227267  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:43.227356  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:43.260099  303486 cri.go:89] found id: ""
	I0920 19:08:43.260128  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.260137  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:43.260143  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:43.260195  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:43.297846  303486 cri.go:89] found id: ""
	I0920 19:08:43.297875  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.297886  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:43.297894  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:43.297980  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:43.334026  303486 cri.go:89] found id: ""
	I0920 19:08:43.334061  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.334070  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:43.334078  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:43.334147  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:43.367765  303486 cri.go:89] found id: ""
	I0920 19:08:43.367795  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.367806  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:43.367814  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:43.367890  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:43.402722  303486 cri.go:89] found id: ""
	I0920 19:08:43.402766  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.402778  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:43.402787  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:43.402852  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:43.439643  303486 cri.go:89] found id: ""
	I0920 19:08:43.439674  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.439682  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:43.439690  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:43.439742  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:43.475931  303486 cri.go:89] found id: ""
	I0920 19:08:43.475965  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.475976  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:43.475991  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:43.476006  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:43.545694  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:43.545725  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:43.545739  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:43.627493  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:43.627549  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:43.667758  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:43.667794  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:43.721803  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:43.721851  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:40.285524  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:42.784336  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:41.962300  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:43.963712  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:45.963766  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:43.105271  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:45.105737  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:47.604667  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:46.237499  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:46.250854  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:46.250925  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:46.288918  303486 cri.go:89] found id: ""
	I0920 19:08:46.288950  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.288957  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:46.288964  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:46.289026  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:46.321113  303486 cri.go:89] found id: ""
	I0920 19:08:46.321149  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.321159  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:46.321168  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:46.321239  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:46.359606  303486 cri.go:89] found id: ""
	I0920 19:08:46.359643  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.359652  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:46.359659  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:46.359729  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:46.397059  303486 cri.go:89] found id: ""
	I0920 19:08:46.397089  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.397098  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:46.397104  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:46.397174  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:46.438224  303486 cri.go:89] found id: ""
	I0920 19:08:46.438261  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.438271  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:46.438279  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:46.438355  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:46.476933  303486 cri.go:89] found id: ""
	I0920 19:08:46.476963  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.476973  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:46.476981  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:46.477047  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:46.522115  303486 cri.go:89] found id: ""
	I0920 19:08:46.522150  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.522160  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:46.522167  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:46.522236  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:46.555508  303486 cri.go:89] found id: ""
	I0920 19:08:46.555541  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.555551  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:46.555565  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:46.555580  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:46.632314  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:46.632358  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:46.672381  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:46.672420  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:46.725777  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:46.725835  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:46.739924  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:46.739959  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:46.816667  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:45.284171  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:47.284420  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:49.284798  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:48.462088  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:50.463100  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:49.606279  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:52.105103  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:49.317620  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:49.331792  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:49.331872  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:49.365417  303486 cri.go:89] found id: ""
	I0920 19:08:49.365457  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.365470  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:49.365479  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:49.365543  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:49.399422  303486 cri.go:89] found id: ""
	I0920 19:08:49.399455  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.399465  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:49.399474  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:49.399532  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:49.433040  303486 cri.go:89] found id: ""
	I0920 19:08:49.433069  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.433076  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:49.433082  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:49.433149  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:49.466865  303486 cri.go:89] found id: ""
	I0920 19:08:49.466897  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.466909  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:49.466917  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:49.466986  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:49.499542  303486 cri.go:89] found id: ""
	I0920 19:08:49.499574  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.499583  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:49.499589  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:49.499639  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:49.534310  303486 cri.go:89] found id: ""
	I0920 19:08:49.534338  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.534346  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:49.534353  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:49.534411  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:49.580271  303486 cri.go:89] found id: ""
	I0920 19:08:49.580297  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.580305  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:49.580312  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:49.580385  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:49.626519  303486 cri.go:89] found id: ""
	I0920 19:08:49.626554  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.626562  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:49.626572  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:49.626587  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:49.682923  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:49.682963  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:49.695859  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:49.695895  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:49.767626  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:49.767669  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:49.767697  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:49.849570  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:49.849614  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:52.387653  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:52.400693  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:52.400757  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:52.434320  303486 cri.go:89] found id: ""
	I0920 19:08:52.434358  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.434369  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:52.434381  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:52.434448  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:52.469167  303486 cri.go:89] found id: ""
	I0920 19:08:52.469202  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.469214  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:52.469222  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:52.469291  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:52.504241  303486 cri.go:89] found id: ""
	I0920 19:08:52.504287  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.504295  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:52.504304  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:52.504367  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:52.539573  303486 cri.go:89] found id: ""
	I0920 19:08:52.539604  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.539613  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:52.539619  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:52.539697  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:52.573794  303486 cri.go:89] found id: ""
	I0920 19:08:52.573821  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.573829  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:52.573834  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:52.573931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:52.607628  303486 cri.go:89] found id: ""
	I0920 19:08:52.607660  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.607670  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:52.607676  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:52.607738  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:52.639088  303486 cri.go:89] found id: ""
	I0920 19:08:52.639121  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.639132  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:52.639140  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:52.639204  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:52.673585  303486 cri.go:89] found id: ""
	I0920 19:08:52.673624  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.673636  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:52.673650  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:52.673667  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:52.726463  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:52.726504  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:52.739520  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:52.739553  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:52.820610  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:52.820638  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:52.820653  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:52.898567  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:52.898612  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:51.783687  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:53.784963  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:52.962326  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:54.963069  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:54.105159  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:56.604367  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:55.440875  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:55.454526  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:55.454602  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:55.490616  303486 cri.go:89] found id: ""
	I0920 19:08:55.490655  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.490664  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:55.490671  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:55.490735  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:55.530256  303486 cri.go:89] found id: ""
	I0920 19:08:55.530287  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.530296  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:55.530304  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:55.530357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:55.565209  303486 cri.go:89] found id: ""
	I0920 19:08:55.565242  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.565253  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:55.565260  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:55.565319  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:55.599522  303486 cri.go:89] found id: ""
	I0920 19:08:55.599553  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.599563  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:55.599571  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:55.599634  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:55.634662  303486 cri.go:89] found id: ""
	I0920 19:08:55.634692  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.634700  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:55.634707  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:55.634759  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:55.670326  303486 cri.go:89] found id: ""
	I0920 19:08:55.670361  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.670372  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:55.670379  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:55.670434  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:55.702589  303486 cri.go:89] found id: ""
	I0920 19:08:55.702617  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.702625  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:55.702632  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:55.702694  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:55.737615  303486 cri.go:89] found id: ""
	I0920 19:08:55.737643  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.737653  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:55.737667  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:55.737682  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:55.816827  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:55.816873  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:55.855521  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:55.855550  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:55.905002  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:55.905047  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:55.918292  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:55.918324  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:55.987445  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:58.488566  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:58.503898  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:58.504001  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:58.539089  303486 cri.go:89] found id: ""
	I0920 19:08:58.539117  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.539127  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:58.539135  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:58.539199  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:58.576432  303486 cri.go:89] found id: ""
	I0920 19:08:58.576459  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.576467  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:58.576473  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:58.576542  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:58.613779  303486 cri.go:89] found id: ""
	I0920 19:08:58.613814  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.613825  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:58.613833  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:58.613932  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:58.648717  303486 cri.go:89] found id: ""
	I0920 19:08:58.648757  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.648768  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:58.648777  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:58.648845  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:58.681533  303486 cri.go:89] found id: ""
	I0920 19:08:58.681568  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.681585  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:58.681593  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:58.681647  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:58.714833  303486 cri.go:89] found id: ""
	I0920 19:08:58.714867  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.714877  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:58.714886  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:58.714951  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:58.755939  303486 cri.go:89] found id: ""
	I0920 19:08:58.755972  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.755980  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:58.755986  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:58.756037  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:58.793195  303486 cri.go:89] found id: ""
	I0920 19:08:58.793229  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.793240  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:58.793252  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:58.793267  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:58.807903  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:58.807939  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:58.873993  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:58.874022  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:58.874042  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:56.283846  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.286474  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:56.963398  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.963513  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.606087  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:01.106199  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.955201  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:58.955249  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:58.994230  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:58.994265  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:01.548403  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:01.561467  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:01.561541  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:01.595339  303486 cri.go:89] found id: ""
	I0920 19:09:01.595374  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.595382  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:01.595388  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:01.595463  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:01.631995  303486 cri.go:89] found id: ""
	I0920 19:09:01.632033  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.632043  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:01.632051  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:01.632119  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:01.667556  303486 cri.go:89] found id: ""
	I0920 19:09:01.667586  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.667596  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:01.667604  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:01.667669  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:01.702678  303486 cri.go:89] found id: ""
	I0920 19:09:01.702708  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.702716  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:01.702723  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:01.702786  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:01.739953  303486 cri.go:89] found id: ""
	I0920 19:09:01.739987  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.739999  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:01.740008  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:01.740075  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:01.774188  303486 cri.go:89] found id: ""
	I0920 19:09:01.774222  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.774239  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:01.774249  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:01.774317  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:01.808885  303486 cri.go:89] found id: ""
	I0920 19:09:01.808916  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.808927  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:01.808935  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:01.808997  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:01.842357  303486 cri.go:89] found id: ""
	I0920 19:09:01.842394  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.842404  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:01.842417  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:01.842433  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:01.881750  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:01.881782  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:01.932190  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:01.932236  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:01.946305  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:01.946337  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:02.020099  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:02.020127  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:02.020141  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:00.784428  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:03.284109  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:01.462613  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:03.962360  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:05.963735  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:03.605623  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:06.104994  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:04.601186  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:04.614292  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:04.614374  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:04.649579  303486 cri.go:89] found id: ""
	I0920 19:09:04.649611  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.649619  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:04.649625  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:04.649683  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:04.684039  303486 cri.go:89] found id: ""
	I0920 19:09:04.684076  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.684094  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:04.684108  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:04.684182  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:04.729130  303486 cri.go:89] found id: ""
	I0920 19:09:04.729166  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.729177  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:04.729186  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:04.729244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:04.762646  303486 cri.go:89] found id: ""
	I0920 19:09:04.762682  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.762690  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:04.762697  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:04.762761  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:04.797492  303486 cri.go:89] found id: ""
	I0920 19:09:04.797518  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.797527  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:04.797533  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:04.797588  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:04.832780  303486 cri.go:89] found id: ""
	I0920 19:09:04.832813  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.832823  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:04.832831  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:04.832893  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:04.868489  303486 cri.go:89] found id: ""
	I0920 19:09:04.868526  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.868537  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:04.868546  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:04.868613  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:04.901115  303486 cri.go:89] found id: ""
	I0920 19:09:04.901156  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.901164  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:04.901174  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:04.901186  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:04.952435  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:04.952482  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:04.966450  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:04.966481  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:05.035951  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:05.035977  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:05.035991  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:05.120961  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:05.121016  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:07.659497  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:07.672989  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:07.673062  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:07.708200  303486 cri.go:89] found id: ""
	I0920 19:09:07.708236  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.708247  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:07.708256  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:07.708320  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:07.742116  303486 cri.go:89] found id: ""
	I0920 19:09:07.742156  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.742166  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:07.742175  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:07.742231  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:07.774369  303486 cri.go:89] found id: ""
	I0920 19:09:07.774401  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.774410  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:07.774419  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:07.774485  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:07.811727  303486 cri.go:89] found id: ""
	I0920 19:09:07.811756  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.811763  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:07.811769  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:07.811825  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:07.849613  303486 cri.go:89] found id: ""
	I0920 19:09:07.849646  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.849655  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:07.849661  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:07.849715  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:07.884643  303486 cri.go:89] found id: ""
	I0920 19:09:07.884679  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.884690  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:07.884698  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:07.884770  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:07.920240  303486 cri.go:89] found id: ""
	I0920 19:09:07.920272  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.920283  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:07.920292  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:07.920371  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:07.954729  303486 cri.go:89] found id: ""
	I0920 19:09:07.954768  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.954780  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:07.954792  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:07.954808  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:08.008679  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:08.008732  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:08.023637  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:08.023673  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:08.097298  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:08.097325  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:08.097340  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:08.173404  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:08.173444  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:05.783765  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:08.283642  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:08.462994  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:10.965062  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:08.106350  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:10.605138  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:12.605390  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:10.718224  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:10.732520  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:10.732593  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:10.766764  303486 cri.go:89] found id: ""
	I0920 19:09:10.766800  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.766811  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:10.766821  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:10.766887  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:10.800039  303486 cri.go:89] found id: ""
	I0920 19:09:10.800077  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.800087  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:10.800095  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:10.800157  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:10.833931  303486 cri.go:89] found id: ""
	I0920 19:09:10.833969  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.833979  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:10.833985  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:10.834057  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:10.867714  303486 cri.go:89] found id: ""
	I0920 19:09:10.867752  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.867763  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:10.867771  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:10.867840  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:10.903026  303486 cri.go:89] found id: ""
	I0920 19:09:10.903060  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.903068  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:10.903075  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:10.903131  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:10.936968  303486 cri.go:89] found id: ""
	I0920 19:09:10.937002  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.937013  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:10.937021  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:10.937089  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:10.973055  303486 cri.go:89] found id: ""
	I0920 19:09:10.973079  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.973087  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:10.973093  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:10.973145  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:11.010283  303486 cri.go:89] found id: ""
	I0920 19:09:11.010310  303486 logs.go:276] 0 containers: []
	W0920 19:09:11.010321  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:11.010333  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:11.010352  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:11.025202  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:11.025239  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:11.104268  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:11.104295  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:11.104312  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:11.182281  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:11.182326  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:11.219296  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:11.219335  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:13.767833  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:13.780805  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:13.780890  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:13.822288  303486 cri.go:89] found id: ""
	I0920 19:09:13.822317  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.822327  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:13.822334  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:13.822388  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:13.862068  303486 cri.go:89] found id: ""
	I0920 19:09:13.862098  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.862106  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:13.862112  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:13.862163  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:13.898497  303486 cri.go:89] found id: ""
	I0920 19:09:13.898529  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.898540  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:13.898550  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:13.898618  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:13.935994  303486 cri.go:89] found id: ""
	I0920 19:09:13.936022  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.936030  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:13.936038  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:13.936105  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:10.277863  302869 pod_ready.go:82] duration metric: took 4m0.000569658s for pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace to be "Ready" ...
	E0920 19:09:10.277919  302869 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 19:09:10.277965  302869 pod_ready.go:39] duration metric: took 4m13.052343801s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:10.278003  302869 kubeadm.go:597] duration metric: took 4m21.10965758s to restartPrimaryControlPlane
	W0920 19:09:10.278125  302869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 19:09:10.278168  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:09:13.462752  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:15.962371  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:14.605565  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:17.112026  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:13.973764  303486 cri.go:89] found id: ""
	I0920 19:09:13.973801  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.973812  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:13.973820  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:13.973898  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:14.009443  303486 cri.go:89] found id: ""
	I0920 19:09:14.009482  303486 logs.go:276] 0 containers: []
	W0920 19:09:14.009494  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:14.009502  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:14.009577  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:14.045593  303486 cri.go:89] found id: ""
	I0920 19:09:14.045629  303486 logs.go:276] 0 containers: []
	W0920 19:09:14.045639  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:14.045648  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:14.045714  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:14.086273  303486 cri.go:89] found id: ""
	I0920 19:09:14.086310  303486 logs.go:276] 0 containers: []
	W0920 19:09:14.086319  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:14.086330  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:14.086343  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:14.140730  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:14.140772  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:14.154198  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:14.154232  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:14.224716  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:14.224739  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:14.224754  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:14.302625  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:14.302665  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:16.840816  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:16.854905  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:16.855002  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:16.892994  303486 cri.go:89] found id: ""
	I0920 19:09:16.893028  303486 logs.go:276] 0 containers: []
	W0920 19:09:16.893038  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:16.893045  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:16.893103  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:16.931265  303486 cri.go:89] found id: ""
	I0920 19:09:16.931293  303486 logs.go:276] 0 containers: []
	W0920 19:09:16.931307  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:16.931313  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:16.931364  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:16.970085  303486 cri.go:89] found id: ""
	I0920 19:09:16.970119  303486 logs.go:276] 0 containers: []
	W0920 19:09:16.970129  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:16.970138  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:16.970189  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:17.003163  303486 cri.go:89] found id: ""
	I0920 19:09:17.003194  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.003206  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:17.003214  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:17.003282  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:17.040577  303486 cri.go:89] found id: ""
	I0920 19:09:17.040618  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.040633  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:17.040640  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:17.040706  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:17.073946  303486 cri.go:89] found id: ""
	I0920 19:09:17.073986  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.073995  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:17.074006  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:17.074066  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:17.111569  303486 cri.go:89] found id: ""
	I0920 19:09:17.111636  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.111648  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:17.111664  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:17.111730  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:17.148005  303486 cri.go:89] found id: ""
	I0920 19:09:17.148034  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.148044  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:17.148056  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:17.148072  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:17.222281  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:17.222306  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:17.222324  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:17.297577  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:17.297619  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:17.334709  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:17.334740  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:17.386279  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:17.386320  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:17.962802  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:19.963289  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:19.605813  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:22.105024  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:19.901017  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:19.914489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:19.914571  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:19.955023  303486 cri.go:89] found id: ""
	I0920 19:09:19.955051  303486 logs.go:276] 0 containers: []
	W0920 19:09:19.955060  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:19.955067  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:19.955125  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:19.995536  303486 cri.go:89] found id: ""
	I0920 19:09:19.995575  303486 logs.go:276] 0 containers: []
	W0920 19:09:19.995585  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:19.995594  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:19.995650  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:20.031153  303486 cri.go:89] found id: ""
	I0920 19:09:20.031181  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.031190  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:20.031198  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:20.031266  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:20.064145  303486 cri.go:89] found id: ""
	I0920 19:09:20.064174  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.064190  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:20.064199  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:20.064256  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:20.098399  303486 cri.go:89] found id: ""
	I0920 19:09:20.098429  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.098440  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:20.098449  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:20.098505  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:20.138805  303486 cri.go:89] found id: ""
	I0920 19:09:20.138833  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.138843  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:20.138852  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:20.138914  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:20.183291  303486 cri.go:89] found id: ""
	I0920 19:09:20.183322  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.183333  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:20.183342  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:20.183406  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:20.220344  303486 cri.go:89] found id: ""
	I0920 19:09:20.220378  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.220396  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:20.220409  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:20.220426  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:20.271043  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:20.271086  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:20.286724  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:20.286754  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:20.358233  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:20.358273  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:20.358291  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:20.439511  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:20.439568  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:22.982570  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:22.995384  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:22.995475  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:23.029031  303486 cri.go:89] found id: ""
	I0920 19:09:23.029069  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.029081  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:23.029091  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:23.029166  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:23.063291  303486 cri.go:89] found id: ""
	I0920 19:09:23.063325  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.063336  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:23.063343  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:23.063413  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:23.097494  303486 cri.go:89] found id: ""
	I0920 19:09:23.097525  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.097536  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:23.097545  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:23.097610  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:23.132169  303486 cri.go:89] found id: ""
	I0920 19:09:23.132197  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.132204  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:23.132211  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:23.132276  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:23.173651  303486 cri.go:89] found id: ""
	I0920 19:09:23.173682  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.173692  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:23.173700  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:23.173763  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:23.206098  303486 cri.go:89] found id: ""
	I0920 19:09:23.206135  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.206146  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:23.206155  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:23.206216  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:23.245422  303486 cri.go:89] found id: ""
	I0920 19:09:23.245466  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.245479  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:23.245489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:23.245569  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:23.280326  303486 cri.go:89] found id: ""
	I0920 19:09:23.280357  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.280365  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:23.280376  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:23.280390  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:23.330986  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:23.331034  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:23.344751  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:23.344788  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:23.420213  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:23.420239  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:23.420255  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:23.500449  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:23.500491  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:22.462590  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:24.962516  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:24.105502  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:26.110930  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:26.040050  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:26.056377  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:26.056463  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:26.094122  303486 cri.go:89] found id: ""
	I0920 19:09:26.094160  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.094170  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:26.094179  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:26.094246  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:26.129383  303486 cri.go:89] found id: ""
	I0920 19:09:26.129408  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.129415  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:26.129422  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:26.129472  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:26.163579  303486 cri.go:89] found id: ""
	I0920 19:09:26.163611  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.163621  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:26.163630  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:26.163699  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:26.208026  303486 cri.go:89] found id: ""
	I0920 19:09:26.208057  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.208065  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:26.208071  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:26.208138  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:26.245375  303486 cri.go:89] found id: ""
	I0920 19:09:26.245409  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.245421  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:26.245438  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:26.245500  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:26.280283  303486 cri.go:89] found id: ""
	I0920 19:09:26.280315  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.280326  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:26.280336  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:26.280397  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:26.314621  303486 cri.go:89] found id: ""
	I0920 19:09:26.314657  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.314670  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:26.314679  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:26.314773  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:26.347667  303486 cri.go:89] found id: ""
	I0920 19:09:26.347694  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.347701  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:26.347711  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:26.347722  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:26.397221  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:26.397259  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:26.411126  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:26.411157  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:26.479631  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:26.479657  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:26.479686  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:26.555439  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:26.555477  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:26.962845  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:28.963560  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:28.605949  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:30.612349  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:32.104187  303063 pod_ready.go:82] duration metric: took 4m0.005608637s for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	E0920 19:09:32.104213  303063 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 19:09:32.104224  303063 pod_ready.go:39] duration metric: took 4m5.679030104s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:32.104241  303063 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:09:32.104273  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:32.104327  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:32.151755  303063 cri.go:89] found id: "f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:32.151778  303063 cri.go:89] found id: ""
	I0920 19:09:32.151787  303063 logs.go:276] 1 containers: [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f]
	I0920 19:09:32.151866  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.157358  303063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:32.157426  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:32.201227  303063 cri.go:89] found id: "5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:32.201255  303063 cri.go:89] found id: ""
	I0920 19:09:32.201263  303063 logs.go:276] 1 containers: [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281]
	I0920 19:09:32.201327  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.206508  303063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:32.206604  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:32.243509  303063 cri.go:89] found id: "88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:32.243533  303063 cri.go:89] found id: ""
	I0920 19:09:32.243542  303063 logs.go:276] 1 containers: [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f]
	I0920 19:09:32.243595  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.247764  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:32.247836  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:32.283590  303063 cri.go:89] found id: "25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:32.283627  303063 cri.go:89] found id: ""
	I0920 19:09:32.283637  303063 logs.go:276] 1 containers: [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862]
	I0920 19:09:32.283727  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.287826  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:32.287893  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:32.329071  303063 cri.go:89] found id: "3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:32.329111  303063 cri.go:89] found id: ""
	I0920 19:09:32.329123  303063 logs.go:276] 1 containers: [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4]
	I0920 19:09:32.329196  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.333152  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:32.333236  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:32.372444  303063 cri.go:89] found id: "9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:32.372474  303063 cri.go:89] found id: ""
	I0920 19:09:32.372485  303063 logs.go:276] 1 containers: [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba]
	I0920 19:09:32.372548  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.376414  303063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:32.376494  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:32.412244  303063 cri.go:89] found id: ""
	I0920 19:09:32.412280  303063 logs.go:276] 0 containers: []
	W0920 19:09:32.412291  303063 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:32.412299  303063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 19:09:32.412352  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 19:09:32.449451  303063 cri.go:89] found id: "a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:32.449472  303063 cri.go:89] found id: "0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:32.449477  303063 cri.go:89] found id: ""
	I0920 19:09:32.449485  303063 logs.go:276] 2 containers: [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85]
	I0920 19:09:32.449544  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.454960  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.459688  303063 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:32.459720  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:09:32.599208  303063 logs.go:123] Gathering logs for etcd [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281] ...
	I0920 19:09:32.599241  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:32.656960  303063 logs.go:123] Gathering logs for kube-proxy [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4] ...
	I0920 19:09:32.657000  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:32.703259  303063 logs.go:123] Gathering logs for kube-controller-manager [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba] ...
	I0920 19:09:32.703308  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:32.769218  303063 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:32.769260  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:29.096877  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:29.110081  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:29.110170  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:29.152570  303486 cri.go:89] found id: ""
	I0920 19:09:29.152598  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.152608  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:29.152616  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:29.152689  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:29.188596  303486 cri.go:89] found id: ""
	I0920 19:09:29.188627  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.188638  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:29.188645  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:29.188713  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:29.228789  303486 cri.go:89] found id: ""
	I0920 19:09:29.228831  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.228841  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:29.228850  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:29.228913  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:29.260013  303486 cri.go:89] found id: ""
	I0920 19:09:29.260040  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.260048  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:29.260054  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:29.260105  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:29.293373  303486 cri.go:89] found id: ""
	I0920 19:09:29.293401  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.293411  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:29.293418  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:29.293487  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:29.325860  303486 cri.go:89] found id: ""
	I0920 19:09:29.325898  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.325925  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:29.325935  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:29.326027  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:29.358873  303486 cri.go:89] found id: ""
	I0920 19:09:29.358909  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.358921  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:29.358930  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:29.358994  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:29.392029  303486 cri.go:89] found id: ""
	I0920 19:09:29.392057  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.392067  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:29.392080  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:29.392095  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:29.467460  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:29.467504  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:29.508258  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:29.508298  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:29.559238  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:29.559274  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:29.574233  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:29.574264  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:29.649318  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:32.150539  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:32.168442  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:32.168527  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:32.210069  303486 cri.go:89] found id: ""
	I0920 19:09:32.210103  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.210120  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:32.210129  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:32.210191  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:32.243468  303486 cri.go:89] found id: ""
	I0920 19:09:32.243501  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.243511  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:32.243519  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:32.243586  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:32.275958  303486 cri.go:89] found id: ""
	I0920 19:09:32.275988  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.275996  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:32.276003  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:32.276056  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:32.312560  303486 cri.go:89] found id: ""
	I0920 19:09:32.312598  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.312609  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:32.312620  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:32.312695  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:32.347157  303486 cri.go:89] found id: ""
	I0920 19:09:32.347185  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.347193  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:32.347200  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:32.347264  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:32.382787  303486 cri.go:89] found id: ""
	I0920 19:09:32.382820  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.382832  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:32.382841  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:32.382898  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:32.416182  303486 cri.go:89] found id: ""
	I0920 19:09:32.416216  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.416226  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:32.416234  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:32.416297  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:32.448863  303486 cri.go:89] found id: ""
	I0920 19:09:32.448895  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.448906  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:32.448919  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:32.448934  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:32.501882  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:32.501934  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:32.517984  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:32.518014  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:32.588517  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:32.588547  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:32.588560  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:32.671869  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:32.671921  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:35.211780  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:35.225476  303486 kubeadm.go:597] duration metric: took 4m2.827297435s to restartPrimaryControlPlane
	W0920 19:09:35.225582  303486 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 19:09:35.225618  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:09:35.686956  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:35.701803  303486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:09:35.712572  303486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:09:35.722867  303486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:09:35.722894  303486 kubeadm.go:157] found existing configuration files:
	
	I0920 19:09:35.722948  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:09:35.732295  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:09:35.732358  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:09:35.741569  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:09:35.750515  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:09:35.750577  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:09:35.760469  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:09:35.770207  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:09:35.770284  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:09:35.780121  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:09:35.789887  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:09:35.789974  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:09:35.800914  303486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:09:35.871635  303486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 19:09:35.871691  303486 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:09:36.021411  303486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:09:36.021565  303486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:09:36.021773  303486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 19:09:36.217540  303486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:09:31.462557  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:33.463284  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:35.964501  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:36.723149  302869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.444941441s)
	I0920 19:09:36.723244  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:36.740763  302869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:09:36.751727  302869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:09:36.762710  302869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:09:36.762736  302869 kubeadm.go:157] found existing configuration files:
	
	I0920 19:09:36.762793  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:09:36.773454  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:09:36.773536  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:09:36.784738  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:09:36.794740  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:09:36.794818  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:09:36.805727  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:09:36.818253  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:09:36.818329  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:09:36.831210  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:09:36.842838  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:09:36.842914  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:09:36.853306  302869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:09:36.903121  302869 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 19:09:36.903285  302869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:09:37.025789  302869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:09:37.025969  302869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:09:37.026110  302869 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 19:09:37.034613  302869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:09:36.219542  303486 out.go:235]   - Generating certificates and keys ...
	I0920 19:09:36.219684  303486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:09:36.219769  303486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:09:36.219892  303486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:09:36.219973  303486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:09:36.220090  303486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:09:36.220181  303486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:09:36.220302  303486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:09:36.220414  303486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:09:36.220530  303486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:09:36.220626  303486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:09:36.220691  303486 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:09:36.220767  303486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:09:36.377012  303486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:09:36.706154  303486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:09:36.907341  303486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:09:37.091990  303486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:09:37.122813  303486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:09:37.124422  303486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:09:37.124531  303486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:09:37.277461  303486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:09:33.294289  303063 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:33.294346  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:33.362317  303063 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:33.362364  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:33.375712  303063 logs.go:123] Gathering logs for kube-scheduler [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862] ...
	I0920 19:09:33.375747  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:33.411136  303063 logs.go:123] Gathering logs for storage-provisioner [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5] ...
	I0920 19:09:33.411168  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:33.445649  303063 logs.go:123] Gathering logs for storage-provisioner [0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85] ...
	I0920 19:09:33.445690  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:33.478869  303063 logs.go:123] Gathering logs for container status ...
	I0920 19:09:33.478898  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:33.529433  303063 logs.go:123] Gathering logs for kube-apiserver [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f] ...
	I0920 19:09:33.529480  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:33.570515  303063 logs.go:123] Gathering logs for coredns [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f] ...
	I0920 19:09:33.570560  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:36.107490  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:36.124979  303063 api_server.go:72] duration metric: took 4m17.429642296s to wait for apiserver process to appear ...
	I0920 19:09:36.125014  303063 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:09:36.125069  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:36.125145  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:36.181962  303063 cri.go:89] found id: "f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:36.181990  303063 cri.go:89] found id: ""
	I0920 19:09:36.182001  303063 logs.go:276] 1 containers: [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f]
	I0920 19:09:36.182061  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.186792  303063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:36.186876  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:36.235963  303063 cri.go:89] found id: "5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:36.235993  303063 cri.go:89] found id: ""
	I0920 19:09:36.236003  303063 logs.go:276] 1 containers: [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281]
	I0920 19:09:36.236066  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.241177  303063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:36.241321  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:36.288324  303063 cri.go:89] found id: "88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:36.288353  303063 cri.go:89] found id: ""
	I0920 19:09:36.288361  303063 logs.go:276] 1 containers: [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f]
	I0920 19:09:36.288415  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.293328  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:36.293413  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:36.335126  303063 cri.go:89] found id: "25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:36.335153  303063 cri.go:89] found id: ""
	I0920 19:09:36.335163  303063 logs.go:276] 1 containers: [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862]
	I0920 19:09:36.335226  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.339400  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:36.339470  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:36.375555  303063 cri.go:89] found id: "3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:36.375582  303063 cri.go:89] found id: ""
	I0920 19:09:36.375592  303063 logs.go:276] 1 containers: [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4]
	I0920 19:09:36.375657  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.379679  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:36.379753  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:36.415398  303063 cri.go:89] found id: "9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:36.415424  303063 cri.go:89] found id: ""
	I0920 19:09:36.415434  303063 logs.go:276] 1 containers: [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba]
	I0920 19:09:36.415495  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.420183  303063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:36.420260  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:36.462018  303063 cri.go:89] found id: ""
	I0920 19:09:36.462049  303063 logs.go:276] 0 containers: []
	W0920 19:09:36.462060  303063 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:36.462068  303063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 19:09:36.462129  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 19:09:36.515520  303063 cri.go:89] found id: "a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:36.515551  303063 cri.go:89] found id: "0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:36.515557  303063 cri.go:89] found id: ""
	I0920 19:09:36.515567  303063 logs.go:276] 2 containers: [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85]
	I0920 19:09:36.515628  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.520140  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.524197  303063 logs.go:123] Gathering logs for kube-apiserver [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f] ...
	I0920 19:09:36.524222  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:36.589535  303063 logs.go:123] Gathering logs for coredns [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f] ...
	I0920 19:09:36.589570  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:36.628836  303063 logs.go:123] Gathering logs for storage-provisioner [0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85] ...
	I0920 19:09:36.628865  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:36.667614  303063 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:36.667654  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:37.164164  303063 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:37.164222  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:37.253505  303063 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:37.253550  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:37.272704  303063 logs.go:123] Gathering logs for kube-scheduler [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862] ...
	I0920 19:09:37.272742  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:37.315827  303063 logs.go:123] Gathering logs for kube-proxy [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4] ...
	I0920 19:09:37.315869  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:37.360449  303063 logs.go:123] Gathering logs for kube-controller-manager [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba] ...
	I0920 19:09:37.360479  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:37.428225  303063 logs.go:123] Gathering logs for storage-provisioner [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5] ...
	I0920 19:09:37.428270  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:37.469766  303063 logs.go:123] Gathering logs for container status ...
	I0920 19:09:37.469795  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:37.524517  303063 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:37.524553  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:09:37.652128  303063 logs.go:123] Gathering logs for etcd [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281] ...
	I0920 19:09:37.652162  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:37.036846  302869 out.go:235]   - Generating certificates and keys ...
	I0920 19:09:37.036956  302869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:09:37.037061  302869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:09:37.037194  302869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:09:37.037284  302869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:09:37.037386  302869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:09:37.037462  302869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:09:37.037546  302869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:09:37.037635  302869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:09:37.037734  302869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:09:37.037847  302869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:09:37.037918  302869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:09:37.037995  302869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:09:37.116270  302869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:09:37.615537  302869 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 19:09:37.907479  302869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:09:38.090167  302869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:09:38.209430  302869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:09:38.209780  302869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:09:38.212626  302869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:09:37.279714  303486 out.go:235]   - Booting up control plane ...
	I0920 19:09:37.279861  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:09:37.288448  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:09:37.289724  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:09:37.290822  303486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:09:37.294106  303486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 19:09:38.214873  302869 out.go:235]   - Booting up control plane ...
	I0920 19:09:38.214994  302869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:09:38.215102  302869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:09:38.215199  302869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:09:38.232798  302869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:09:38.238716  302869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:09:38.238784  302869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:09:38.370841  302869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 19:09:38.371037  302869 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 19:09:38.463252  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:40.463322  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:40.212781  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:09:40.217868  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 200:
	ok
	I0920 19:09:40.219021  303063 api_server.go:141] control plane version: v1.31.1
	I0920 19:09:40.219044  303063 api_server.go:131] duration metric: took 4.094023157s to wait for apiserver health ...
	I0920 19:09:40.219053  303063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:09:40.219077  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:40.219128  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:40.264337  303063 cri.go:89] found id: "f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:40.264365  303063 cri.go:89] found id: ""
	I0920 19:09:40.264376  303063 logs.go:276] 1 containers: [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f]
	I0920 19:09:40.264434  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.270143  303063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:40.270222  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:40.321696  303063 cri.go:89] found id: "5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:40.321723  303063 cri.go:89] found id: ""
	I0920 19:09:40.321733  303063 logs.go:276] 1 containers: [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281]
	I0920 19:09:40.321799  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.329068  303063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:40.329149  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:40.387241  303063 cri.go:89] found id: "88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:40.387329  303063 cri.go:89] found id: ""
	I0920 19:09:40.387357  303063 logs.go:276] 1 containers: [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f]
	I0920 19:09:40.387427  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.392896  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:40.392975  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:40.429173  303063 cri.go:89] found id: "25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:40.429200  303063 cri.go:89] found id: ""
	I0920 19:09:40.429210  303063 logs.go:276] 1 containers: [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862]
	I0920 19:09:40.429284  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.434102  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:40.434179  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:40.480569  303063 cri.go:89] found id: "3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:40.480598  303063 cri.go:89] found id: ""
	I0920 19:09:40.480607  303063 logs.go:276] 1 containers: [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4]
	I0920 19:09:40.480669  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.485821  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:40.485935  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:40.531502  303063 cri.go:89] found id: "9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:40.531543  303063 cri.go:89] found id: ""
	I0920 19:09:40.531554  303063 logs.go:276] 1 containers: [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba]
	I0920 19:09:40.531613  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.535699  303063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:40.535769  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:40.569788  303063 cri.go:89] found id: ""
	I0920 19:09:40.569823  303063 logs.go:276] 0 containers: []
	W0920 19:09:40.569835  303063 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:40.569842  303063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 19:09:40.569928  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 19:09:40.604668  303063 cri.go:89] found id: "a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:40.604703  303063 cri.go:89] found id: "0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:40.604710  303063 cri.go:89] found id: ""
	I0920 19:09:40.604721  303063 logs.go:276] 2 containers: [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85]
	I0920 19:09:40.604790  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.608948  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.613331  303063 logs.go:123] Gathering logs for kube-apiserver [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f] ...
	I0920 19:09:40.613360  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:40.657680  303063 logs.go:123] Gathering logs for kube-scheduler [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862] ...
	I0920 19:09:40.657726  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:40.698087  303063 logs.go:123] Gathering logs for kube-controller-manager [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba] ...
	I0920 19:09:40.698125  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:40.753643  303063 logs.go:123] Gathering logs for storage-provisioner [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5] ...
	I0920 19:09:40.753683  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:40.791741  303063 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:40.791790  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:41.176451  303063 logs.go:123] Gathering logs for container status ...
	I0920 19:09:41.176497  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:41.226352  303063 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:41.226386  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:41.307652  303063 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:41.307694  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:41.323271  303063 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:41.323307  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:09:41.441151  303063 logs.go:123] Gathering logs for etcd [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281] ...
	I0920 19:09:41.441195  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:41.495438  303063 logs.go:123] Gathering logs for coredns [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f] ...
	I0920 19:09:41.495494  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:41.543879  303063 logs.go:123] Gathering logs for kube-proxy [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4] ...
	I0920 19:09:41.543930  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:41.595010  303063 logs.go:123] Gathering logs for storage-provisioner [0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85] ...
	I0920 19:09:41.595055  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:44.140048  303063 system_pods.go:59] 8 kube-system pods found
	I0920 19:09:44.140078  303063 system_pods.go:61] "coredns-7c65d6cfc9-427x2" [48b87f9f-4697-4d76-aed1-3d54720172c6] Running
	I0920 19:09:44.140083  303063 system_pods.go:61] "etcd-default-k8s-diff-port-612312" [8537d9ce-87da-425d-a90a-eb4f30f9d23f] Running
	I0920 19:09:44.140087  303063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-612312" [d5705298-0b1f-4f95-bf5d-c795e09dd30e] Running
	I0920 19:09:44.140091  303063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-612312" [99b04f73-91d6-42d4-8def-09b65ca142f6] Running
	I0920 19:09:44.140094  303063 system_pods.go:61] "kube-proxy-zp8l5" [9fe30e51-ef3f-4448-916a-8ad75832b207] Running
	I0920 19:09:44.140097  303063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-612312" [4b9dfd39-5b60-4a90-9edf-0af7e99f0ac6] Running
	I0920 19:09:44.140104  303063 system_pods.go:61] "metrics-server-6867b74b74-2tnqc" [35ce9a11-e606-41da-84bf-b3c5e9a18245] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:44.140108  303063 system_pods.go:61] "storage-provisioner" [6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502] Running
	I0920 19:09:44.140115  303063 system_pods.go:74] duration metric: took 3.921056539s to wait for pod list to return data ...
	I0920 19:09:44.140122  303063 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:09:44.143381  303063 default_sa.go:45] found service account: "default"
	I0920 19:09:44.143409  303063 default_sa.go:55] duration metric: took 3.281031ms for default service account to be created ...
	I0920 19:09:44.143422  303063 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:09:44.148161  303063 system_pods.go:86] 8 kube-system pods found
	I0920 19:09:44.148191  303063 system_pods.go:89] "coredns-7c65d6cfc9-427x2" [48b87f9f-4697-4d76-aed1-3d54720172c6] Running
	I0920 19:09:44.148199  303063 system_pods.go:89] "etcd-default-k8s-diff-port-612312" [8537d9ce-87da-425d-a90a-eb4f30f9d23f] Running
	I0920 19:09:44.148205  303063 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-612312" [d5705298-0b1f-4f95-bf5d-c795e09dd30e] Running
	I0920 19:09:44.148212  303063 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-612312" [99b04f73-91d6-42d4-8def-09b65ca142f6] Running
	I0920 19:09:44.148216  303063 system_pods.go:89] "kube-proxy-zp8l5" [9fe30e51-ef3f-4448-916a-8ad75832b207] Running
	I0920 19:09:44.148221  303063 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-612312" [4b9dfd39-5b60-4a90-9edf-0af7e99f0ac6] Running
	I0920 19:09:44.148230  303063 system_pods.go:89] "metrics-server-6867b74b74-2tnqc" [35ce9a11-e606-41da-84bf-b3c5e9a18245] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:44.148236  303063 system_pods.go:89] "storage-provisioner" [6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502] Running
	I0920 19:09:44.148248  303063 system_pods.go:126] duration metric: took 4.819429ms to wait for k8s-apps to be running ...
	I0920 19:09:44.148260  303063 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:09:44.148312  303063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:44.163839  303063 system_svc.go:56] duration metric: took 15.568956ms WaitForService to wait for kubelet
	I0920 19:09:44.163882  303063 kubeadm.go:582] duration metric: took 4m25.468555427s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:09:44.163911  303063 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:09:44.167622  303063 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:09:44.167656  303063 node_conditions.go:123] node cpu capacity is 2
	I0920 19:09:44.167671  303063 node_conditions.go:105] duration metric: took 3.752828ms to run NodePressure ...
	I0920 19:09:44.167690  303063 start.go:241] waiting for startup goroutines ...
	I0920 19:09:44.167700  303063 start.go:246] waiting for cluster config update ...
	I0920 19:09:44.167716  303063 start.go:255] writing updated cluster config ...
	I0920 19:09:44.168208  303063 ssh_runner.go:195] Run: rm -f paused
	I0920 19:09:44.223860  303063 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:09:44.226056  303063 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-612312" cluster and "default" namespace by default
	I0920 19:09:39.373109  302869 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002236347s
	I0920 19:09:39.373229  302869 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 19:09:44.375102  302869 kubeadm.go:310] [api-check] The API server is healthy after 5.001998039s
	I0920 19:09:44.405405  302869 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 19:09:44.428364  302869 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 19:09:44.470575  302869 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 19:09:44.470870  302869 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-339897 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 19:09:44.505469  302869 kubeadm.go:310] [bootstrap-token] Using token: v5zzut.gmtb3j9b0yqqwvtv
	I0920 19:09:44.507561  302869 out.go:235]   - Configuring RBAC rules ...
	I0920 19:09:44.507721  302869 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 19:09:44.522092  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 19:09:44.555238  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 19:09:44.559971  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 19:09:44.566954  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 19:09:44.574111  302869 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 19:09:44.788900  302869 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 19:09:45.229897  302869 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 19:09:45.788397  302869 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 19:09:45.789415  302869 kubeadm.go:310] 
	I0920 19:09:45.789504  302869 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 19:09:45.789516  302869 kubeadm.go:310] 
	I0920 19:09:45.789614  302869 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 19:09:45.789631  302869 kubeadm.go:310] 
	I0920 19:09:45.789664  302869 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 19:09:45.789804  302869 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 19:09:45.789897  302869 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 19:09:45.789930  302869 kubeadm.go:310] 
	I0920 19:09:45.790043  302869 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 19:09:45.790061  302869 kubeadm.go:310] 
	I0920 19:09:45.790130  302869 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 19:09:45.790145  302869 kubeadm.go:310] 
	I0920 19:09:45.790203  302869 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 19:09:45.790269  302869 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 19:09:45.790330  302869 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 19:09:45.790337  302869 kubeadm.go:310] 
	I0920 19:09:45.790438  302869 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 19:09:45.790549  302869 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 19:09:45.790563  302869 kubeadm.go:310] 
	I0920 19:09:45.790664  302869 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v5zzut.gmtb3j9b0yqqwvtv \
	I0920 19:09:45.790792  302869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 \
	I0920 19:09:45.790823  302869 kubeadm.go:310] 	--control-plane 
	I0920 19:09:45.790835  302869 kubeadm.go:310] 
	I0920 19:09:45.790962  302869 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 19:09:45.790977  302869 kubeadm.go:310] 
	I0920 19:09:45.791045  302869 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v5zzut.gmtb3j9b0yqqwvtv \
	I0920 19:09:45.791164  302869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 
	I0920 19:09:45.792825  302869 kubeadm.go:310] W0920 19:09:36.880654    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:09:45.793122  302869 kubeadm.go:310] W0920 19:09:36.881516    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:09:45.793273  302869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:09:45.793317  302869 cni.go:84] Creating CNI manager for ""
	I0920 19:09:45.793331  302869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:09:45.795282  302869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:09:42.464639  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:44.464714  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:45.796961  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:09:45.808972  302869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:09:45.831122  302869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:09:45.831174  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:45.831208  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-339897 minikube.k8s.io/updated_at=2024_09_20T19_09_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=embed-certs-339897 minikube.k8s.io/primary=true
	I0920 19:09:46.057677  302869 ops.go:34] apiserver oom_adj: -16
	I0920 19:09:46.057798  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:46.558670  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:47.057876  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:47.558913  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:48.057925  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:48.557985  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:49.057925  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:49.558500  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:50.058507  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:50.198032  302869 kubeadm.go:1113] duration metric: took 4.366908909s to wait for elevateKubeSystemPrivileges
	I0920 19:09:50.198074  302869 kubeadm.go:394] duration metric: took 5m1.087269263s to StartCluster
	I0920 19:09:50.198100  302869 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:09:50.198209  302869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:09:50.200736  302869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:09:50.201068  302869 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:09:50.201327  302869 config.go:182] Loaded profile config "embed-certs-339897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:09:50.201393  302869 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:09:50.201482  302869 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-339897"
	I0920 19:09:50.201502  302869 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-339897"
	W0920 19:09:50.201512  302869 addons.go:243] addon storage-provisioner should already be in state true
	I0920 19:09:50.201542  302869 host.go:66] Checking if "embed-certs-339897" exists ...
	I0920 19:09:50.202007  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.202050  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.202261  302869 addons.go:69] Setting default-storageclass=true in profile "embed-certs-339897"
	I0920 19:09:50.202285  302869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-339897"
	I0920 19:09:50.202285  302869 addons.go:69] Setting metrics-server=true in profile "embed-certs-339897"
	I0920 19:09:50.202311  302869 addons.go:234] Setting addon metrics-server=true in "embed-certs-339897"
	W0920 19:09:50.202319  302869 addons.go:243] addon metrics-server should already be in state true
	I0920 19:09:50.202349  302869 host.go:66] Checking if "embed-certs-339897" exists ...
	I0920 19:09:50.202688  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.202752  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.202755  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.202793  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.203329  302869 out.go:177] * Verifying Kubernetes components...
	I0920 19:09:50.204655  302869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:09:50.224081  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46289
	I0920 19:09:50.224334  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45801
	I0920 19:09:50.224337  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0920 19:09:50.224579  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.224941  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.225039  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.225214  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.225231  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.225643  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.225682  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.225699  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.225798  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.225818  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.226018  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.226080  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.226564  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.226594  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.226777  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.227444  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.227494  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.229747  302869 addons.go:234] Setting addon default-storageclass=true in "embed-certs-339897"
	W0920 19:09:50.229771  302869 addons.go:243] addon default-storageclass should already be in state true
	I0920 19:09:50.229803  302869 host.go:66] Checking if "embed-certs-339897" exists ...
	I0920 19:09:50.230208  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.230261  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.243865  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42249
	I0920 19:09:50.244292  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.244828  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.244851  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.245080  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I0920 19:09:50.245252  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.245714  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.245810  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.246303  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.246323  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.246661  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.246806  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.248050  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:09:50.248671  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:09:50.250223  302869 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 19:09:50.250319  302869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:09:46.963562  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:48.965266  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:50.250485  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38237
	I0920 19:09:50.250954  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.251418  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.251435  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.251535  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:09:50.251556  302869 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:09:50.251594  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:09:50.251680  302869 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:09:50.251693  302869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:09:50.251706  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:09:50.251889  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.252452  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.252502  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.255422  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.255692  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.255902  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:09:50.255928  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.256372  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:09:50.256396  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.256442  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:09:50.256663  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:09:50.256697  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:09:50.256840  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:09:50.256868  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:09:50.257066  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:09:50.257089  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:09:50.257268  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:09:50.272424  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35925
	I0920 19:09:50.273107  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.273729  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.273746  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.274208  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.274402  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.276189  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:09:50.276384  302869 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:09:50.276399  302869 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:09:50.276417  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:09:50.279319  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.279718  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:09:50.279747  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.279850  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:09:50.280044  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:09:50.280305  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:09:50.280481  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:09:50.407262  302869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:09:50.455491  302869 node_ready.go:35] waiting up to 6m0s for node "embed-certs-339897" to be "Ready" ...
	I0920 19:09:50.503634  302869 node_ready.go:49] node "embed-certs-339897" has status "Ready":"True"
	I0920 19:09:50.503663  302869 node_ready.go:38] duration metric: took 48.13478ms for node "embed-certs-339897" to be "Ready" ...
	I0920 19:09:50.503672  302869 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:50.532327  302869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:50.589446  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:09:50.589482  302869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 19:09:50.613277  302869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:09:50.619161  302869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:09:50.662197  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:09:50.662232  302869 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:09:50.753073  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:09:50.753106  302869 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:09:50.842679  302869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:09:51.790932  302869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.171721983s)
	I0920 19:09:51.790997  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791012  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.791029  302869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.177708427s)
	I0920 19:09:51.791073  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791089  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.791380  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.791438  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.791444  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.791483  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791380  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.791527  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.791541  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791556  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.791416  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.791493  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.793128  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.793159  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.793177  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.793149  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.793148  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.793208  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.820906  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.820939  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.821290  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.821312  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:52.003182  302869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.160452395s)
	I0920 19:09:52.003247  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:52.003261  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:52.003593  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:52.003600  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:52.003622  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:52.003632  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:52.003640  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:52.003985  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:52.004003  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:52.004017  302869 addons.go:475] Verifying addon metrics-server=true in "embed-certs-339897"
	I0920 19:09:52.006444  302869 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 19:09:52.008313  302869 addons.go:510] duration metric: took 1.806914162s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 19:09:52.539578  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:53.539999  302869 pod_ready.go:93] pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:53.540026  302869 pod_ready.go:82] duration metric: took 3.007669334s for pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:53.540036  302869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:51.463340  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:53.963461  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:55.547997  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:57.552686  302869 pod_ready.go:93] pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.552714  302869 pod_ready.go:82] duration metric: took 4.01267227s for pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.552724  302869 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.560885  302869 pod_ready.go:93] pod "etcd-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.560910  302869 pod_ready.go:82] duration metric: took 8.179457ms for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.560919  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.577414  302869 pod_ready.go:93] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.577441  302869 pod_ready.go:82] duration metric: took 16.515029ms for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.577451  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.588547  302869 pod_ready.go:93] pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.588574  302869 pod_ready.go:82] duration metric: took 11.116334ms for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.588583  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-whcbh" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.594919  302869 pod_ready.go:93] pod "kube-proxy-whcbh" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.594942  302869 pod_ready.go:82] duration metric: took 6.35266ms for pod "kube-proxy-whcbh" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.594951  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.943559  302869 pod_ready.go:93] pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.943585  302869 pod_ready.go:82] duration metric: took 348.626555ms for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.943592  302869 pod_ready.go:39] duration metric: took 7.439908161s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:57.943609  302869 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:09:57.943662  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:57.959537  302869 api_server.go:72] duration metric: took 7.758426976s to wait for apiserver process to appear ...
	I0920 19:09:57.959567  302869 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:09:57.959594  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:09:57.964316  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 200:
	ok
	I0920 19:09:57.965668  302869 api_server.go:141] control plane version: v1.31.1
	I0920 19:09:57.965690  302869 api_server.go:131] duration metric: took 6.115168ms to wait for apiserver health ...
	I0920 19:09:57.965697  302869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:09:58.148306  302869 system_pods.go:59] 9 kube-system pods found
	I0920 19:09:58.148339  302869 system_pods.go:61] "coredns-7c65d6cfc9-2zlww" [5eb78763-7160-4ae9-80c3-87a82a6dc992] Running
	I0920 19:09:58.148345  302869 system_pods.go:61] "coredns-7c65d6cfc9-7fxdr" [85a441e8-39b0-4623-a7bd-eebbd1574f20] Running
	I0920 19:09:58.148349  302869 system_pods.go:61] "etcd-embed-certs-339897" [150a2276-3896-498e-89f7-44cf4554da69] Running
	I0920 19:09:58.148352  302869 system_pods.go:61] "kube-apiserver-embed-certs-339897" [396520a3-2567-4267-852d-9f9525dd5e01] Running
	I0920 19:09:58.148356  302869 system_pods.go:61] "kube-controller-manager-embed-certs-339897" [7f64ad97-3230-4cf5-92ad-cf58ef88a2b0] Running
	I0920 19:09:58.148359  302869 system_pods.go:61] "kube-proxy-whcbh" [3a2dbb60-1a51-4874-98b8-75d1a35b0512] Running
	I0920 19:09:58.148361  302869 system_pods.go:61] "kube-scheduler-embed-certs-339897" [31214783-f8cf-46c6-a305-fde7692dfc72] Running
	I0920 19:09:58.148367  302869 system_pods.go:61] "metrics-server-6867b74b74-tw9fh" [8366591d-8916-4b9f-be8a-64ddc185f576] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:58.148371  302869 system_pods.go:61] "storage-provisioner" [8bcc482a-6905-436a-8d90-7eee9ba18f8b] Running
	I0920 19:09:58.148381  302869 system_pods.go:74] duration metric: took 182.677921ms to wait for pod list to return data ...
	I0920 19:09:58.148387  302869 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:09:58.344318  302869 default_sa.go:45] found service account: "default"
	I0920 19:09:58.344346  302869 default_sa.go:55] duration metric: took 195.952788ms for default service account to be created ...
	I0920 19:09:58.344357  302869 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:09:58.547996  302869 system_pods.go:86] 9 kube-system pods found
	I0920 19:09:58.548034  302869 system_pods.go:89] "coredns-7c65d6cfc9-2zlww" [5eb78763-7160-4ae9-80c3-87a82a6dc992] Running
	I0920 19:09:58.548043  302869 system_pods.go:89] "coredns-7c65d6cfc9-7fxdr" [85a441e8-39b0-4623-a7bd-eebbd1574f20] Running
	I0920 19:09:58.548048  302869 system_pods.go:89] "etcd-embed-certs-339897" [150a2276-3896-498e-89f7-44cf4554da69] Running
	I0920 19:09:58.548054  302869 system_pods.go:89] "kube-apiserver-embed-certs-339897" [396520a3-2567-4267-852d-9f9525dd5e01] Running
	I0920 19:09:58.548060  302869 system_pods.go:89] "kube-controller-manager-embed-certs-339897" [7f64ad97-3230-4cf5-92ad-cf58ef88a2b0] Running
	I0920 19:09:58.548066  302869 system_pods.go:89] "kube-proxy-whcbh" [3a2dbb60-1a51-4874-98b8-75d1a35b0512] Running
	I0920 19:09:58.548070  302869 system_pods.go:89] "kube-scheduler-embed-certs-339897" [31214783-f8cf-46c6-a305-fde7692dfc72] Running
	I0920 19:09:58.548079  302869 system_pods.go:89] "metrics-server-6867b74b74-tw9fh" [8366591d-8916-4b9f-be8a-64ddc185f576] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:58.548085  302869 system_pods.go:89] "storage-provisioner" [8bcc482a-6905-436a-8d90-7eee9ba18f8b] Running
	I0920 19:09:58.548099  302869 system_pods.go:126] duration metric: took 203.735171ms to wait for k8s-apps to be running ...
	I0920 19:09:58.548108  302869 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:09:58.548165  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:58.563235  302869 system_svc.go:56] duration metric: took 15.107997ms WaitForService to wait for kubelet
	I0920 19:09:58.563274  302869 kubeadm.go:582] duration metric: took 8.362165276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:09:58.563299  302869 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:09:58.744093  302869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:09:58.744155  302869 node_conditions.go:123] node cpu capacity is 2
	I0920 19:09:58.744171  302869 node_conditions.go:105] duration metric: took 180.864643ms to run NodePressure ...
	I0920 19:09:58.744186  302869 start.go:241] waiting for startup goroutines ...
	I0920 19:09:58.744196  302869 start.go:246] waiting for cluster config update ...
	I0920 19:09:58.744220  302869 start.go:255] writing updated cluster config ...
	I0920 19:09:58.744526  302869 ssh_runner.go:195] Run: rm -f paused
	I0920 19:09:58.794946  302869 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:09:58.797418  302869 out.go:177] * Done! kubectl is now configured to use "embed-certs-339897" cluster and "default" namespace by default
	I0920 19:09:56.464024  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:58.464282  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:00.963419  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:02.963506  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:04.963804  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:07.463546  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:09.962855  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:11.963447  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:13.964915  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:17.296411  303486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 19:10:17.296525  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:17.296765  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:16.462968  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:18.963906  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:22.297630  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:22.297923  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:21.463201  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:22.457112  302538 pod_ready.go:82] duration metric: took 4m0.000881628s for pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace to be "Ready" ...
	E0920 19:10:22.457161  302538 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 19:10:22.457180  302538 pod_ready.go:39] duration metric: took 4m14.047738931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:10:22.457208  302538 kubeadm.go:597] duration metric: took 4m21.028566787s to restartPrimaryControlPlane
	W0920 19:10:22.457265  302538 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 19:10:22.457291  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:10:32.298239  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:32.298525  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:48.632052  302538 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.17473972s)
	I0920 19:10:48.632143  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:10:48.648205  302538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:10:48.658969  302538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:10:48.668954  302538 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:10:48.668981  302538 kubeadm.go:157] found existing configuration files:
	
	I0920 19:10:48.669035  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:10:48.678138  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:10:48.678229  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:10:48.687960  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:10:48.697578  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:10:48.697644  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:10:48.707573  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:10:48.717059  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:10:48.717123  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:10:48.727642  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:10:48.737599  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:10:48.737681  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:10:48.749542  302538 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:10:48.795278  302538 kubeadm.go:310] W0920 19:10:48.780113    2961 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:10:48.796096  302538 kubeadm.go:310] W0920 19:10:48.780928    2961 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:10:48.910958  302538 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:10:52.299257  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:52.299561  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:56.716717  302538 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 19:10:56.716805  302538 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:10:56.716938  302538 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:10:56.717078  302538 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:10:56.717170  302538 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 19:10:56.717225  302538 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:10:56.719086  302538 out.go:235]   - Generating certificates and keys ...
	I0920 19:10:56.719199  302538 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:10:56.719286  302538 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:10:56.719407  302538 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:10:56.719505  302538 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:10:56.719624  302538 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:10:56.719720  302538 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:10:56.719811  302538 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:10:56.719928  302538 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:10:56.720049  302538 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:10:56.720154  302538 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:10:56.720224  302538 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:10:56.720287  302538 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:10:56.720334  302538 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:10:56.720386  302538 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 19:10:56.720432  302538 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:10:56.720486  302538 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:10:56.720533  302538 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:10:56.720606  302538 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:10:56.720701  302538 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:10:56.722504  302538 out.go:235]   - Booting up control plane ...
	I0920 19:10:56.722620  302538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:10:56.722748  302538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:10:56.722872  302538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:10:56.723020  302538 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:10:56.723105  302538 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:10:56.723148  302538 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:10:56.723337  302538 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 19:10:56.723455  302538 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 19:10:56.723515  302538 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.448196ms
	I0920 19:10:56.723612  302538 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 19:10:56.723706  302538 kubeadm.go:310] [api-check] The API server is healthy after 5.001495273s
	I0920 19:10:56.723888  302538 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 19:10:56.724046  302538 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 19:10:56.724131  302538 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 19:10:56.724406  302538 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-037711 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 19:10:56.724464  302538 kubeadm.go:310] [bootstrap-token] Using token: 2hi1gl.ipidz4nvj8gip8th
	I0920 19:10:56.726099  302538 out.go:235]   - Configuring RBAC rules ...
	I0920 19:10:56.726212  302538 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 19:10:56.726315  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 19:10:56.726479  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 19:10:56.726641  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 19:10:56.726794  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 19:10:56.726926  302538 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 19:10:56.727082  302538 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 19:10:56.727154  302538 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 19:10:56.727202  302538 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 19:10:56.727209  302538 kubeadm.go:310] 
	I0920 19:10:56.727261  302538 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 19:10:56.727267  302538 kubeadm.go:310] 
	I0920 19:10:56.727363  302538 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 19:10:56.727383  302538 kubeadm.go:310] 
	I0920 19:10:56.727424  302538 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 19:10:56.727507  302538 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 19:10:56.727607  302538 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 19:10:56.727620  302538 kubeadm.go:310] 
	I0920 19:10:56.727699  302538 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 19:10:56.727712  302538 kubeadm.go:310] 
	I0920 19:10:56.727775  302538 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 19:10:56.727790  302538 kubeadm.go:310] 
	I0920 19:10:56.727865  302538 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 19:10:56.727969  302538 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 19:10:56.728032  302538 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 19:10:56.728038  302538 kubeadm.go:310] 
	I0920 19:10:56.728106  302538 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 19:10:56.728171  302538 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 19:10:56.728177  302538 kubeadm.go:310] 
	I0920 19:10:56.728271  302538 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2hi1gl.ipidz4nvj8gip8th \
	I0920 19:10:56.728406  302538 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 \
	I0920 19:10:56.728438  302538 kubeadm.go:310] 	--control-plane 
	I0920 19:10:56.728451  302538 kubeadm.go:310] 
	I0920 19:10:56.728571  302538 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 19:10:56.728577  302538 kubeadm.go:310] 
	I0920 19:10:56.728675  302538 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2hi1gl.ipidz4nvj8gip8th \
	I0920 19:10:56.728823  302538 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 
	I0920 19:10:56.728837  302538 cni.go:84] Creating CNI manager for ""
	I0920 19:10:56.728843  302538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:10:56.730851  302538 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:10:56.732462  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:10:56.745326  302538 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:10:56.764458  302538 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:10:56.764563  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:56.764620  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-037711 minikube.k8s.io/updated_at=2024_09_20T19_10_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=no-preload-037711 minikube.k8s.io/primary=true
	I0920 19:10:56.792026  302538 ops.go:34] apiserver oom_adj: -16
	I0920 19:10:56.976178  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:57.477172  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:57.977076  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:58.476357  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:58.977162  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:59.476924  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:59.976506  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:11:00.080925  302538 kubeadm.go:1113] duration metric: took 3.316440483s to wait for elevateKubeSystemPrivileges
	I0920 19:11:00.080968  302538 kubeadm.go:394] duration metric: took 4m58.701872852s to StartCluster
	I0920 19:11:00.080994  302538 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:11:00.081106  302538 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:11:00.082815  302538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:11:00.083064  302538 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:11:00.083137  302538 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:11:00.083243  302538 addons.go:69] Setting storage-provisioner=true in profile "no-preload-037711"
	I0920 19:11:00.083263  302538 addons.go:234] Setting addon storage-provisioner=true in "no-preload-037711"
	W0920 19:11:00.083272  302538 addons.go:243] addon storage-provisioner should already be in state true
	I0920 19:11:00.083263  302538 addons.go:69] Setting default-storageclass=true in profile "no-preload-037711"
	I0920 19:11:00.083299  302538 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-037711"
	I0920 19:11:00.083308  302538 host.go:66] Checking if "no-preload-037711" exists ...
	I0920 19:11:00.083304  302538 addons.go:69] Setting metrics-server=true in profile "no-preload-037711"
	I0920 19:11:00.083342  302538 addons.go:234] Setting addon metrics-server=true in "no-preload-037711"
	W0920 19:11:00.083354  302538 addons.go:243] addon metrics-server should already be in state true
	I0920 19:11:00.083385  302538 host.go:66] Checking if "no-preload-037711" exists ...
	I0920 19:11:00.083315  302538 config.go:182] Loaded profile config "no-preload-037711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:11:00.083667  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.083709  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.083715  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.083750  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.083864  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.083912  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.084969  302538 out.go:177] * Verifying Kubernetes components...
	I0920 19:11:00.086652  302538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:11:00.102128  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
	I0920 19:11:00.102362  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33853
	I0920 19:11:00.102750  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43575
	I0920 19:11:00.102879  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.103041  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.103431  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.103635  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.103651  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.103767  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.103783  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.104022  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.104040  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.104042  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.104180  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.104383  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.104394  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.104842  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.104881  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.104927  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.104963  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.107816  302538 addons.go:234] Setting addon default-storageclass=true in "no-preload-037711"
	W0920 19:11:00.107836  302538 addons.go:243] addon default-storageclass should already be in state true
	I0920 19:11:00.107865  302538 host.go:66] Checking if "no-preload-037711" exists ...
	I0920 19:11:00.108193  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.108236  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.121661  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I0920 19:11:00.122693  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.123520  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.123642  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.124299  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.124530  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.125624  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40695
	I0920 19:11:00.126343  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.126439  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I0920 19:11:00.126868  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.126947  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:11:00.127277  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.127302  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.127572  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.127599  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.127646  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.127902  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.128095  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.128318  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.128360  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.129099  302538 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:11:00.129788  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:11:00.130688  302538 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:11:00.130713  302538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:11:00.130732  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:11:00.131393  302538 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 19:11:00.132404  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:11:00.132432  302538 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:11:00.132454  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:11:00.134112  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.134627  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:11:00.134690  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.135041  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:11:00.135215  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:11:00.135448  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:11:00.135550  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:11:00.136315  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.136816  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:11:00.136849  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.137011  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:11:00.137231  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:11:00.137409  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:11:00.137589  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:11:00.166369  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I0920 19:11:00.166884  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.167464  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.167483  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.167850  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.168037  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.169668  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:11:00.169875  302538 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:11:00.169891  302538 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:11:00.169925  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:11:00.172907  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.173383  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:11:00.173416  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.173577  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:11:00.173820  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:11:00.174010  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:11:00.174212  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:11:00.275468  302538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:11:00.290839  302538 node_ready.go:35] waiting up to 6m0s for node "no-preload-037711" to be "Ready" ...
	I0920 19:11:00.300222  302538 node_ready.go:49] node "no-preload-037711" has status "Ready":"True"
	I0920 19:11:00.300244  302538 node_ready.go:38] duration metric: took 9.368069ms for node "no-preload-037711" to be "Ready" ...
	I0920 19:11:00.300253  302538 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:11:00.306099  302538 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:00.364927  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:11:00.364956  302538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 19:11:00.382910  302538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:11:00.392581  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:11:00.392611  302538 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:11:00.404275  302538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:11:00.442677  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:11:00.442707  302538 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:11:00.500976  302538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:11:01.337157  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337196  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337169  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337265  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337558  302538 main.go:141] libmachine: (no-preload-037711) DBG | Closing plugin on server side
	I0920 19:11:01.337573  302538 main.go:141] libmachine: (no-preload-037711) DBG | Closing plugin on server side
	I0920 19:11:01.337600  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.337613  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.337641  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337649  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337685  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.337702  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.337711  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337720  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337961  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.337978  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.338064  302538 main.go:141] libmachine: (no-preload-037711) DBG | Closing plugin on server side
	I0920 19:11:01.338114  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.338133  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.395956  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.395989  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.396327  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.396355  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.580133  302538 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.079115769s)
	I0920 19:11:01.580188  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.580203  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.580548  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.580568  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.580578  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.580586  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.580817  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.580842  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.580853  302538 addons.go:475] Verifying addon metrics-server=true in "no-preload-037711"
	I0920 19:11:01.582786  302538 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 19:11:01.584283  302538 addons.go:510] duration metric: took 1.501156808s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 19:11:02.314471  302538 pod_ready.go:103] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:04.817174  302538 pod_ready.go:103] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:07.312399  302538 pod_ready.go:103] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:07.812969  302538 pod_ready.go:93] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:07.812999  302538 pod_ready.go:82] duration metric: took 7.506877081s for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:07.813008  302538 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:07.818172  302538 pod_ready.go:93] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:07.818200  302538 pod_ready.go:82] duration metric: took 5.184579ms for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:07.818211  302538 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:09.825772  302538 pod_ready.go:103] pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:10.325453  302538 pod_ready.go:93] pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:10.325479  302538 pod_ready.go:82] duration metric: took 2.507262085s for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:10.325489  302538 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:10.331181  302538 pod_ready.go:93] pod "kube-scheduler-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:10.331208  302538 pod_ready.go:82] duration metric: took 5.711573ms for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:10.331216  302538 pod_ready.go:39] duration metric: took 10.030954081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:11:10.331233  302538 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:11:10.331286  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:11:10.348104  302538 api_server.go:72] duration metric: took 10.265008499s to wait for apiserver process to appear ...
	I0920 19:11:10.348135  302538 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:11:10.348157  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:11:10.352242  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 200:
	ok
	I0920 19:11:10.353228  302538 api_server.go:141] control plane version: v1.31.1
	I0920 19:11:10.353249  302538 api_server.go:131] duration metric: took 5.107446ms to wait for apiserver health ...
	I0920 19:11:10.353257  302538 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:11:10.358560  302538 system_pods.go:59] 9 kube-system pods found
	I0920 19:11:10.358588  302538 system_pods.go:61] "coredns-7c65d6cfc9-gdfh9" [61c6d6d8-62b9-4db3-a3c3-fd0daec82a9f] Running
	I0920 19:11:10.358593  302538 system_pods.go:61] "coredns-7c65d6cfc9-h84nm" [6ada3ba7-1ccd-474b-850b-c00a77dfbb92] Running
	I0920 19:11:10.358597  302538 system_pods.go:61] "etcd-no-preload-037711" [9ace2dcd-0562-46d5-99be-65be4ea053d9] Running
	I0920 19:11:10.358601  302538 system_pods.go:61] "kube-apiserver-no-preload-037711" [1dbfa130-d2dd-420d-a32c-1e82b535c112] Running
	I0920 19:11:10.358604  302538 system_pods.go:61] "kube-controller-manager-no-preload-037711" [56462390-dedd-4281-ac85-2671f7a10cb1] Running
	I0920 19:11:10.358607  302538 system_pods.go:61] "kube-proxy-bvfqh" [2170ef3f-58f0-4d42-9f15-d9c952e0e2ec] Running
	I0920 19:11:10.358610  302538 system_pods.go:61] "kube-scheduler-no-preload-037711" [e996ce53-7ee6-4d1d-bd0b-8188d76966b9] Running
	I0920 19:11:10.358617  302538 system_pods.go:61] "metrics-server-6867b74b74-rpfqm" [ba7c8518-6c3e-4751-a9a5-29c77990a29c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:11:10.358620  302538 system_pods.go:61] "storage-provisioner" [e7f05c0a-c6be-4e68-959e-966c17c9cc5e] Running
	I0920 19:11:10.358629  302538 system_pods.go:74] duration metric: took 5.365343ms to wait for pod list to return data ...
	I0920 19:11:10.358635  302538 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:11:10.361229  302538 default_sa.go:45] found service account: "default"
	I0920 19:11:10.361255  302538 default_sa.go:55] duration metric: took 2.612292ms for default service account to be created ...
	I0920 19:11:10.361264  302538 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:11:10.367188  302538 system_pods.go:86] 9 kube-system pods found
	I0920 19:11:10.367221  302538 system_pods.go:89] "coredns-7c65d6cfc9-gdfh9" [61c6d6d8-62b9-4db3-a3c3-fd0daec82a9f] Running
	I0920 19:11:10.367229  302538 system_pods.go:89] "coredns-7c65d6cfc9-h84nm" [6ada3ba7-1ccd-474b-850b-c00a77dfbb92] Running
	I0920 19:11:10.367235  302538 system_pods.go:89] "etcd-no-preload-037711" [9ace2dcd-0562-46d5-99be-65be4ea053d9] Running
	I0920 19:11:10.367241  302538 system_pods.go:89] "kube-apiserver-no-preload-037711" [1dbfa130-d2dd-420d-a32c-1e82b535c112] Running
	I0920 19:11:10.367248  302538 system_pods.go:89] "kube-controller-manager-no-preload-037711" [56462390-dedd-4281-ac85-2671f7a10cb1] Running
	I0920 19:11:10.367254  302538 system_pods.go:89] "kube-proxy-bvfqh" [2170ef3f-58f0-4d42-9f15-d9c952e0e2ec] Running
	I0920 19:11:10.367260  302538 system_pods.go:89] "kube-scheduler-no-preload-037711" [e996ce53-7ee6-4d1d-bd0b-8188d76966b9] Running
	I0920 19:11:10.367267  302538 system_pods.go:89] "metrics-server-6867b74b74-rpfqm" [ba7c8518-6c3e-4751-a9a5-29c77990a29c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:11:10.367273  302538 system_pods.go:89] "storage-provisioner" [e7f05c0a-c6be-4e68-959e-966c17c9cc5e] Running
	I0920 19:11:10.367283  302538 system_pods.go:126] duration metric: took 6.01247ms to wait for k8s-apps to be running ...
	I0920 19:11:10.367292  302538 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:11:10.367354  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:11:10.381551  302538 system_svc.go:56] duration metric: took 14.250301ms WaitForService to wait for kubelet
	I0920 19:11:10.381582  302538 kubeadm.go:582] duration metric: took 10.298492318s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:11:10.381601  302538 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:11:10.385405  302538 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:11:10.385442  302538 node_conditions.go:123] node cpu capacity is 2
	I0920 19:11:10.385455  302538 node_conditions.go:105] duration metric: took 3.849463ms to run NodePressure ...
	I0920 19:11:10.385468  302538 start.go:241] waiting for startup goroutines ...
	I0920 19:11:10.385474  302538 start.go:246] waiting for cluster config update ...
	I0920 19:11:10.385485  302538 start.go:255] writing updated cluster config ...
	I0920 19:11:10.385786  302538 ssh_runner.go:195] Run: rm -f paused
	I0920 19:11:10.436362  302538 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:11:10.438538  302538 out.go:177] * Done! kubectl is now configured to use "no-preload-037711" cluster and "default" namespace by default
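For reference, the readiness sequence logged above (kube-system pod list, default service account, kubelet service check) can be spot-checked by hand against the same profile. This is a minimal sketch only, assuming the "no-preload-037711" context written by this run and minikube's SSH access to the node:

	# pods and service account that the wait loop above polls for
	kubectl --context no-preload-037711 get pods -n kube-system
	kubectl --context no-preload-037711 get serviceaccount default
	# mirror the 'systemctl is-active' kubelet check run over ssh_runner in the log
	minikube -p no-preload-037711 ssh -- sudo systemctl is-active kubelet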
	I0920 19:11:32.301334  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:11:32.302020  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:11:32.302048  303486 kubeadm.go:310] 
	I0920 19:11:32.302147  303486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 19:11:32.302252  303486 kubeadm.go:310] 		timed out waiting for the condition
	I0920 19:11:32.302279  303486 kubeadm.go:310] 
	I0920 19:11:32.302366  303486 kubeadm.go:310] 	This error is likely caused by:
	I0920 19:11:32.302453  303486 kubeadm.go:310] 		- The kubelet is not running
	I0920 19:11:32.302713  303486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 19:11:32.302731  303486 kubeadm.go:310] 
	I0920 19:11:32.303023  303486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 19:11:32.303099  303486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 19:11:32.303200  303486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 19:11:32.303232  303486 kubeadm.go:310] 
	I0920 19:11:32.303438  303486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 19:11:32.303669  303486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 19:11:32.303699  303486 kubeadm.go:310] 
	I0920 19:11:32.303965  303486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 19:11:32.304199  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 19:11:32.304410  303486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 19:11:32.304577  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 19:11:32.304624  303486 kubeadm.go:310] 
	I0920 19:11:32.305105  303486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:11:32.305465  303486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 19:11:32.305655  303486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0920 19:11:32.305713  303486 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0920 19:11:32.305758  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:11:32.760742  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:11:32.775675  303486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:11:32.785785  303486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:11:32.785806  303486 kubeadm.go:157] found existing configuration files:
	
	I0920 19:11:32.785854  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:11:32.795133  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:11:32.795210  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:11:32.805681  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:11:32.815299  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:11:32.815362  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:11:32.827215  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:11:32.836597  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:11:32.836682  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:11:32.846621  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:11:32.855610  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:11:32.855675  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:11:32.866824  303486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:11:33.103745  303486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:13:29.101212  303486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 19:13:29.101347  303486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 19:13:29.103031  303486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 19:13:29.103142  303486 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:13:29.103216  303486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:13:29.103318  303486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:13:29.103437  303486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 19:13:29.103507  303486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:13:29.105521  303486 out.go:235]   - Generating certificates and keys ...
	I0920 19:13:29.105622  303486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:13:29.105704  303486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:13:29.105820  303486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:13:29.105955  303486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:13:29.106058  303486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:13:29.106132  303486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:13:29.106219  303486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:13:29.106318  303486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:13:29.106430  303486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:13:29.106548  303486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:13:29.106611  303486 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:13:29.106699  303486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:13:29.106766  303486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:13:29.106844  303486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:13:29.106935  303486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:13:29.107011  303486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:13:29.107117  303486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:13:29.107223  303486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:13:29.107289  303486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:13:29.107376  303486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:13:29.108804  303486 out.go:235]   - Booting up control plane ...
	I0920 19:13:29.108952  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:13:29.109021  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:13:29.109082  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:13:29.109166  303486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:13:29.109313  303486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 19:13:29.109359  303486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 19:13:29.109462  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.109630  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.109699  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.109878  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.109966  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.110133  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.110213  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.110382  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.110441  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.110606  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.110616  303486 kubeadm.go:310] 
	I0920 19:13:29.110661  303486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 19:13:29.110699  303486 kubeadm.go:310] 		timed out waiting for the condition
	I0920 19:13:29.110706  303486 kubeadm.go:310] 
	I0920 19:13:29.110739  303486 kubeadm.go:310] 	This error is likely caused by:
	I0920 19:13:29.110769  303486 kubeadm.go:310] 		- The kubelet is not running
	I0920 19:13:29.110866  303486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 19:13:29.110875  303486 kubeadm.go:310] 
	I0920 19:13:29.110969  303486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 19:13:29.111003  303486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 19:13:29.111031  303486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 19:13:29.111037  303486 kubeadm.go:310] 
	I0920 19:13:29.111141  303486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 19:13:29.111224  303486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 19:13:29.111231  303486 kubeadm.go:310] 
	I0920 19:13:29.111327  303486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 19:13:29.111407  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 19:13:29.111481  303486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 19:13:29.111542  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 19:13:29.111610  303486 kubeadm.go:394] duration metric: took 7m56.768319159s to StartCluster
	I0920 19:13:29.111640  303486 kubeadm.go:310] 
	I0920 19:13:29.111664  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:13:29.111734  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:13:29.157817  303486 cri.go:89] found id: ""
	I0920 19:13:29.157849  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.157859  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:13:29.157867  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:13:29.157950  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:13:29.192130  303486 cri.go:89] found id: ""
	I0920 19:13:29.192164  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.192179  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:13:29.192187  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:13:29.192243  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:13:29.227594  303486 cri.go:89] found id: ""
	I0920 19:13:29.227631  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.227642  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:13:29.227651  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:13:29.227724  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:13:29.261948  303486 cri.go:89] found id: ""
	I0920 19:13:29.261981  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.261996  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:13:29.262004  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:13:29.262072  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:13:29.295148  303486 cri.go:89] found id: ""
	I0920 19:13:29.295181  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.295191  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:13:29.295200  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:13:29.295270  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:13:29.328094  303486 cri.go:89] found id: ""
	I0920 19:13:29.328127  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.328135  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:13:29.328142  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:13:29.328194  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:13:29.368830  303486 cri.go:89] found id: ""
	I0920 19:13:29.368870  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.368878  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:13:29.368885  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:13:29.368947  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:13:29.420051  303486 cri.go:89] found id: ""
	I0920 19:13:29.420081  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.420091  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:13:29.420106  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:13:29.420123  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:13:29.498322  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:13:29.498350  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:13:29.498364  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:13:29.601796  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:13:29.601842  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:13:29.644325  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:13:29.644368  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:13:29.692691  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:13:29.692736  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0920 19:13:29.707508  303486 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 19:13:29.707577  303486 out.go:270] * 
	W0920 19:13:29.707646  303486 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 19:13:29.707664  303486 out.go:270] * 
	W0920 19:13:29.708560  303486 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 19:13:29.711313  303486 out.go:201] 
	W0920 19:13:29.712520  303486 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 19:13:29.712553  303486 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 19:13:29.712576  303486 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 19:13:29.713832  303486 out.go:201] 
	
	
	==> CRI-O <==
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.764185421Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859611764163926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=67585de7-6d25-43a2-b47b-d6e6b8eceb65 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.764746827Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83163c50-201c-4c63-b9c1-4bd9c7c2f446 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.764816083Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83163c50-201c-4c63-b9c1-4bd9c7c2f446 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.764858541Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=83163c50-201c-4c63-b9c1-4bd9c7c2f446 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.796802610Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f9b84507-b088-48b1-9c95-0f609e062065 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.796902664Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f9b84507-b088-48b1-9c95-0f609e062065 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.798081240Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5a89fe7-f2e4-4529-843a-9d0a59d84c77 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.798513258Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859611798491858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5a89fe7-f2e4-4529-843a-9d0a59d84c77 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.798937796Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00d9d8be-38ca-4738-874c-462637239df1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.798984566Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00d9d8be-38ca-4738-874c-462637239df1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.799027748Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=00d9d8be-38ca-4738-874c-462637239df1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.830704467Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ca35fb5-6a20-45c2-841d-2857abb1b4f8 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.830786237Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ca35fb5-6a20-45c2-841d-2857abb1b4f8 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.831769935Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83d102e4-b737-4a5c-b45a-4f257b58f609 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.832156430Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859611832133383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83d102e4-b737-4a5c-b45a-4f257b58f609 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.832718434Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4dac659-270d-486d-8d4d-3ae3caffe4ea name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.832795689Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4dac659-270d-486d-8d4d-3ae3caffe4ea name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.832838569Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a4dac659-270d-486d-8d4d-3ae3caffe4ea name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.864717614Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e6ea49e-32e2-4a18-91eb-f6fba4d72abf name=/runtime.v1.RuntimeService/Version
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.864802803Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e6ea49e-32e2-4a18-91eb-f6fba4d72abf name=/runtime.v1.RuntimeService/Version
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.865665377Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=404ef41d-6dec-46e0-9a68-7715ff0b8ab8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.866111693Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859611866088177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=404ef41d-6dec-46e0-9a68-7715ff0b8ab8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.866577343Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4e1581c-b053-4174-a401-6428f248daf5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.866630745Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4e1581c-b053-4174-a401-6428f248daf5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:13:31 old-k8s-version-425599 crio[625]: time="2024-09-20 19:13:31.866666981Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a4e1581c-b053-4174-a401-6428f248daf5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep20 19:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051564] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038083] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.892466] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.024288] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.561052] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.228283] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.074163] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.093321] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.193588] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.158367] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.273001] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +6.667482] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.066383] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.180794] systemd-fstab-generator[1000]: Ignoring "noauto" option for root device
	[ +11.395339] kauditd_printk_skb: 46 callbacks suppressed
	[Sep20 19:09] systemd-fstab-generator[5034]: Ignoring "noauto" option for root device
	[Sep20 19:11] systemd-fstab-generator[5316]: Ignoring "noauto" option for root device
	[  +0.064997] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:13:32 up 8 min,  0 users,  load average: 0.00, 0.07, 0.05
	Linux old-k8s-version-425599 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 20 19:13:29 old-k8s-version-425599 kubelet[5490]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Sep 20 19:13:29 old-k8s-version-425599 kubelet[5490]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Sep 20 19:13:29 old-k8s-version-425599 kubelet[5490]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Sep 20 19:13:29 old-k8s-version-425599 kubelet[5490]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000545ef0)
	Sep 20 19:13:29 old-k8s-version-425599 kubelet[5490]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Sep 20 19:13:29 old-k8s-version-425599 kubelet[5490]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000bc1ef0, 0x4f0ac20, 0xc000114a00, 0x1, 0xc0000be060)
	Sep 20 19:13:29 old-k8s-version-425599 kubelet[5490]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Sep 20 19:13:29 old-k8s-version-425599 kubelet[5490]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0002547e0, 0xc0000be060)
	Sep 20 19:13:29 old-k8s-version-425599 kubelet[5490]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 20 19:13:29 old-k8s-version-425599 kubelet[5490]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 20 19:13:29 old-k8s-version-425599 kubelet[5490]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 20 19:13:29 old-k8s-version-425599 kubelet[5490]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000a1ecd0, 0xc0009b78e0)
	Sep 20 19:13:29 old-k8s-version-425599 kubelet[5490]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 20 19:13:29 old-k8s-version-425599 kubelet[5490]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 20 19:13:29 old-k8s-version-425599 kubelet[5490]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 20 19:13:29 old-k8s-version-425599 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 20 19:13:29 old-k8s-version-425599 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 20 19:13:30 old-k8s-version-425599 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Sep 20 19:13:30 old-k8s-version-425599 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 20 19:13:30 old-k8s-version-425599 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 20 19:13:30 old-k8s-version-425599 kubelet[5558]: I0920 19:13:30.166477    5558 server.go:416] Version: v1.20.0
	Sep 20 19:13:30 old-k8s-version-425599 kubelet[5558]: I0920 19:13:30.167156    5558 server.go:837] Client rotation is on, will bootstrap in background
	Sep 20 19:13:30 old-k8s-version-425599 kubelet[5558]: I0920 19:13:30.170352    5558 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 20 19:13:30 old-k8s-version-425599 kubelet[5558]: I0920 19:13:30.171909    5558 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Sep 20 19:13:30 old-k8s-version-425599 kubelet[5558]: W0920 19:13:30.172080    5558 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-425599 -n old-k8s-version-425599
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-425599 -n old-k8s-version-425599: exit status 2 (238.39255ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-425599" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (724.59s)
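Note (editorial, not part of the captured run): the kubelet log above shows the unit crash-looping on this v1.20.0 node (systemd restart counter at 20), with the restarted kubelet immediately logging a cgroup v2 detection warning. A minimal sketch of how the unit could be inspected directly on the node, assuming the profile name taken from the log and the same minikube binary used elsewhere in this report:

    out/minikube-linux-amd64 ssh -p old-k8s-version-425599 sudo systemctl status kubelet --no-pager
    out/minikube-linux-amd64 ssh -p old-k8s-version-425599 sudo journalctl -u kubelet --no-pager -n 50

The ssh form mirrors the "ssh -p <profile> sudo ..." invocations recorded in the Audit tables; it is illustrative only.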

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0920 19:09:48.376659  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-612312 -n default-k8s-diff-port-612312
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-20 19:18:44.816718385 +0000 UTC m=+6187.477165028
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
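Note (editorial, not part of the captured run): the wait above polls for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace for 9m0s. A hedged sketch of the equivalent manual check, assuming the kubeconfig context matches the profile name as in the kubectl invocations used elsewhere in this report:

    kubectl --context default-k8s-diff-port-612312 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
    kubectl --context default-k8s-diff-port-612312 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard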
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-612312 -n default-k8s-diff-port-612312
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-612312 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-612312 logs -n 25: (2.21245162s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-793540 sudo cat                             | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo                                 | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo                                 | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo                                 | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo find                            | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo crio                            | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-793540                                      | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-896665 | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | disable-driver-mounts-896665                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:57 UTC |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-037711             | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-037711                                   | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-339897            | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-339897                                  | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-612312  | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-037711                  | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-037711                                   | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC | 20 Sep 24 19:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-339897                 | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-425599        | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-339897                                  | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC | 20 Sep 24 19:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-612312       | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC | 20 Sep 24 19:09 UTC |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-425599                              | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-425599             | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-425599                              | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:01:28
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:01:28.948776  303486 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:01:28.948894  303486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:01:28.948900  303486 out.go:358] Setting ErrFile to fd 2...
	I0920 19:01:28.948906  303486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:01:28.949090  303486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 19:01:28.949637  303486 out.go:352] Setting JSON to false
	I0920 19:01:28.950705  303486 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9832,"bootTime":1726849057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 19:01:28.950809  303486 start.go:139] virtualization: kvm guest
	I0920 19:01:28.953226  303486 out.go:177] * [old-k8s-version-425599] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 19:01:28.955013  303486 notify.go:220] Checking for updates...
	I0920 19:01:28.955065  303486 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 19:01:28.956932  303486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:01:28.959076  303486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:01:28.961116  303486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 19:01:28.963396  303486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 19:01:28.965428  303486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:01:28.967688  303486 config.go:182] Loaded profile config "old-k8s-version-425599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 19:01:28.968112  303486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:01:28.968175  303486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:01:28.984002  303486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40325
	I0920 19:01:28.984552  303486 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:01:28.985260  303486 main.go:141] libmachine: Using API Version  1
	I0920 19:01:28.985291  303486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:01:28.985715  303486 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:01:28.985972  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:01:28.988070  303486 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 19:01:28.989565  303486 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:01:28.990007  303486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:01:28.990079  303486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:01:29.006020  303486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34059
	I0920 19:01:29.006492  303486 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:01:29.007046  303486 main.go:141] libmachine: Using API Version  1
	I0920 19:01:29.007078  303486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:01:29.007441  303486 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:01:29.007706  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:01:29.049785  303486 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 19:01:29.051185  303486 start.go:297] selected driver: kvm2
	I0920 19:01:29.051206  303486 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:01:29.051323  303486 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:01:29.052030  303486 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:01:29.052131  303486 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 19:01:29.068826  303486 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 19:01:29.069232  303486 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:01:29.069262  303486 cni.go:84] Creating CNI manager for ""
	I0920 19:01:29.069297  303486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:01:29.069333  303486 start.go:340] cluster config:
	{Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:01:29.069439  303486 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:01:29.071617  303486 out.go:177] * Starting "old-k8s-version-425599" primary control-plane node in "old-k8s-version-425599" cluster
	I0920 19:01:27.086248  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:29.073133  303486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 19:01:29.073174  303486 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 19:01:29.073182  303486 cache.go:56] Caching tarball of preloaded images
	I0920 19:01:29.073269  303486 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 19:01:29.073285  303486 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 19:01:29.073388  303486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/config.json ...
	I0920 19:01:29.073573  303486 start.go:360] acquireMachinesLock for old-k8s-version-425599: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:01:33.166258  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:36.238261  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:42.318195  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:45.390223  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:51.470272  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:54.542277  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:00.622232  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:03.694275  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:09.774241  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:12.846248  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:18.926213  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:21.998195  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:28.078192  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:31.150239  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:37.230160  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:40.302224  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:46.382225  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:49.454205  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:55.534186  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:58.606232  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:04.686254  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:07.758234  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:13.838239  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:16.910321  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:22.990234  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:26.062339  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:32.142210  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:35.214256  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:41.294234  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:44.366288  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:50.446215  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:53.518266  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:59.598190  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:02.670240  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:08.750179  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:11.822239  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:17.902176  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:20.974235  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:23.977804  302869 start.go:364] duration metric: took 4m19.519175605s to acquireMachinesLock for "embed-certs-339897"
	I0920 19:04:23.977868  302869 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:04:23.977876  302869 fix.go:54] fixHost starting: 
	I0920 19:04:23.978233  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:04:23.978277  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:04:23.993804  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39423
	I0920 19:04:23.994326  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:04:23.994906  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:04:23.994925  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:04:23.995219  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:04:23.995413  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:23.995575  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:04:23.997417  302869 fix.go:112] recreateIfNeeded on embed-certs-339897: state=Stopped err=<nil>
	I0920 19:04:23.997439  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	W0920 19:04:23.997636  302869 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:04:24.001021  302869 out.go:177] * Restarting existing kvm2 VM for "embed-certs-339897" ...
	I0920 19:04:24.002636  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Start
	I0920 19:04:24.002842  302869 main.go:141] libmachine: (embed-certs-339897) Ensuring networks are active...
	I0920 19:04:24.003916  302869 main.go:141] libmachine: (embed-certs-339897) Ensuring network default is active
	I0920 19:04:24.004282  302869 main.go:141] libmachine: (embed-certs-339897) Ensuring network mk-embed-certs-339897 is active
	I0920 19:04:24.004647  302869 main.go:141] libmachine: (embed-certs-339897) Getting domain xml...
	I0920 19:04:24.005446  302869 main.go:141] libmachine: (embed-certs-339897) Creating domain...
	I0920 19:04:23.975096  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:04:23.975155  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:04:23.975457  302538 buildroot.go:166] provisioning hostname "no-preload-037711"
	I0920 19:04:23.975485  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:04:23.975712  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:04:23.977607  302538 machine.go:96] duration metric: took 4m37.412034117s to provisionDockerMachine
	I0920 19:04:23.977703  302538 fix.go:56] duration metric: took 4m37.437032108s for fixHost
	I0920 19:04:23.977718  302538 start.go:83] releasing machines lock for "no-preload-037711", held for 4m37.437081737s
	W0920 19:04:23.977745  302538 start.go:714] error starting host: provision: host is not running
	W0920 19:04:23.977850  302538 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0920 19:04:23.977859  302538 start.go:729] Will try again in 5 seconds ...
	I0920 19:04:25.258221  302869 main.go:141] libmachine: (embed-certs-339897) Waiting to get IP...
	I0920 19:04:25.259119  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:25.259493  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:25.259584  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:25.259481  304091 retry.go:31] will retry after 212.462393ms: waiting for machine to come up
	I0920 19:04:25.474057  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:25.474524  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:25.474564  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:25.474441  304091 retry.go:31] will retry after 306.01691ms: waiting for machine to come up
	I0920 19:04:25.782264  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:25.782729  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:25.782753  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:25.782706  304091 retry.go:31] will retry after 416.637796ms: waiting for machine to come up
	I0920 19:04:26.201336  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:26.201704  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:26.201738  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:26.201645  304091 retry.go:31] will retry after 583.373452ms: waiting for machine to come up
	I0920 19:04:26.786448  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:26.786854  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:26.786876  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:26.786807  304091 retry.go:31] will retry after 760.706965ms: waiting for machine to come up
	I0920 19:04:27.548786  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:27.549126  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:27.549149  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:27.549088  304091 retry.go:31] will retry after 615.829194ms: waiting for machine to come up
	I0920 19:04:28.167061  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:28.167601  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:28.167647  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:28.167419  304091 retry.go:31] will retry after 786.700064ms: waiting for machine to come up
	I0920 19:04:28.955294  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:28.955658  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:28.955685  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:28.955611  304091 retry.go:31] will retry after 1.309567829s: waiting for machine to come up
	I0920 19:04:28.979506  302538 start.go:360] acquireMachinesLock for no-preload-037711: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:04:30.267104  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:30.267645  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:30.267676  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:30.267583  304091 retry.go:31] will retry after 1.153396834s: waiting for machine to come up
	I0920 19:04:31.423030  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:31.423604  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:31.423629  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:31.423542  304091 retry.go:31] will retry after 1.858288741s: waiting for machine to come up
	I0920 19:04:33.284886  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:33.285381  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:33.285419  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:33.285334  304091 retry.go:31] will retry after 2.343802005s: waiting for machine to come up
	I0920 19:04:35.630962  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:35.631408  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:35.631439  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:35.631359  304091 retry.go:31] will retry after 2.42254126s: waiting for machine to come up
	I0920 19:04:38.055128  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:38.055796  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:38.055854  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:38.055732  304091 retry.go:31] will retry after 3.877296828s: waiting for machine to come up
	I0920 19:04:43.362725  303063 start.go:364] duration metric: took 4m20.211671699s to acquireMachinesLock for "default-k8s-diff-port-612312"
	I0920 19:04:43.362794  303063 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:04:43.362810  303063 fix.go:54] fixHost starting: 
	I0920 19:04:43.363257  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:04:43.363315  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:04:43.380877  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34751
	I0920 19:04:43.381399  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:04:43.381894  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:04:43.381933  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:04:43.382364  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:04:43.382596  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:04:43.382746  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:04:43.384351  303063 fix.go:112] recreateIfNeeded on default-k8s-diff-port-612312: state=Stopped err=<nil>
	I0920 19:04:43.384379  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	W0920 19:04:43.384540  303063 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:04:43.386969  303063 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-612312" ...
	I0920 19:04:41.936215  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.936789  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has current primary IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.936811  302869 main.go:141] libmachine: (embed-certs-339897) Found IP for machine: 192.168.72.72
	I0920 19:04:41.936823  302869 main.go:141] libmachine: (embed-certs-339897) Reserving static IP address...
	I0920 19:04:41.937386  302869 main.go:141] libmachine: (embed-certs-339897) Reserved static IP address: 192.168.72.72
	I0920 19:04:41.937412  302869 main.go:141] libmachine: (embed-certs-339897) Waiting for SSH to be available...
	I0920 19:04:41.937435  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "embed-certs-339897", mac: "52:54:00:dc:b1:41", ip: "192.168.72.72"} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:41.937466  302869 main.go:141] libmachine: (embed-certs-339897) DBG | skip adding static IP to network mk-embed-certs-339897 - found existing host DHCP lease matching {name: "embed-certs-339897", mac: "52:54:00:dc:b1:41", ip: "192.168.72.72"}
	I0920 19:04:41.937481  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Getting to WaitForSSH function...
	I0920 19:04:41.939673  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.940065  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:41.940089  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.940196  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Using SSH client type: external
	I0920 19:04:41.940223  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa (-rw-------)
	I0920 19:04:41.940261  302869 main.go:141] libmachine: (embed-certs-339897) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:04:41.940274  302869 main.go:141] libmachine: (embed-certs-339897) DBG | About to run SSH command:
	I0920 19:04:41.940285  302869 main.go:141] libmachine: (embed-certs-339897) DBG | exit 0
	I0920 19:04:42.065967  302869 main.go:141] libmachine: (embed-certs-339897) DBG | SSH cmd err, output: <nil>: 
	I0920 19:04:42.066357  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetConfigRaw
	I0920 19:04:42.067004  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:42.069586  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.069937  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.069968  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.070208  302869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/config.json ...
	I0920 19:04:42.070452  302869 machine.go:93] provisionDockerMachine start ...
	I0920 19:04:42.070478  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:42.070687  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.072878  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.073340  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.073375  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.073501  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.073701  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.073899  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.074080  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.074254  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.074504  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.074523  302869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:04:42.182250  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:04:42.182287  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetMachineName
	I0920 19:04:42.182543  302869 buildroot.go:166] provisioning hostname "embed-certs-339897"
	I0920 19:04:42.182570  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetMachineName
	I0920 19:04:42.182818  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.185497  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.185850  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.185886  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.186069  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.186274  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.186421  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.186568  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.186770  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.186986  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.187006  302869 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-339897 && echo "embed-certs-339897" | sudo tee /etc/hostname
	I0920 19:04:42.307656  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-339897
	
	I0920 19:04:42.307700  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.310572  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.310943  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.310970  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.311184  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.311382  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.311534  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.311663  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.311810  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.311984  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.312003  302869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-339897' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-339897/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-339897' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:04:42.426403  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:04:42.426440  302869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:04:42.426493  302869 buildroot.go:174] setting up certificates
	I0920 19:04:42.426502  302869 provision.go:84] configureAuth start
	I0920 19:04:42.426513  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetMachineName
	I0920 19:04:42.426822  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:42.429708  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.430134  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.430170  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.430328  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.432799  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.433222  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.433251  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.433383  302869 provision.go:143] copyHostCerts
	I0920 19:04:42.433466  302869 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:04:42.433487  302869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:04:42.433549  302869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:04:42.433644  302869 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:04:42.433652  302869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:04:42.433678  302869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:04:42.433735  302869 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:04:42.433742  302869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:04:42.433762  302869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:04:42.433811  302869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.embed-certs-339897 san=[127.0.0.1 192.168.72.72 embed-certs-339897 localhost minikube]
	I0920 19:04:42.745528  302869 provision.go:177] copyRemoteCerts
	I0920 19:04:42.745599  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:04:42.745633  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.748247  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.748587  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.748619  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.748811  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.749014  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.749201  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.749334  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:42.831927  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:04:42.855674  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:04:42.879114  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 19:04:42.902982  302869 provision.go:87] duration metric: took 476.462339ms to configureAuth
	I0920 19:04:42.903019  302869 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:04:42.903236  302869 config.go:182] Loaded profile config "embed-certs-339897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:04:42.903321  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.906208  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.906580  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.906613  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.906810  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.907006  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.907136  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.907263  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.907427  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.907601  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.907616  302869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:04:43.127800  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:04:43.127847  302869 machine.go:96] duration metric: took 1.057372659s to provisionDockerMachine
	I0920 19:04:43.127864  302869 start.go:293] postStartSetup for "embed-certs-339897" (driver="kvm2")
	I0920 19:04:43.127890  302869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:04:43.127917  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.128263  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:04:43.128298  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.131648  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.132138  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.132173  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.132340  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.132560  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.132739  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.132896  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:43.216646  302869 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:04:43.220513  302869 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:04:43.220548  302869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:04:43.220629  302869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:04:43.220734  302869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:04:43.220862  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:04:43.230506  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:04:43.252894  302869 start.go:296] duration metric: took 125.003067ms for postStartSetup
	I0920 19:04:43.252943  302869 fix.go:56] duration metric: took 19.275066559s for fixHost
	I0920 19:04:43.252971  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.255999  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.256378  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.256406  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.256634  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.256858  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.257047  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.257214  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.257382  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:43.257546  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:43.257556  302869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:04:43.362516  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859083.339291891
	
	I0920 19:04:43.362545  302869 fix.go:216] guest clock: 1726859083.339291891
	I0920 19:04:43.362553  302869 fix.go:229] Guest: 2024-09-20 19:04:43.339291891 +0000 UTC Remote: 2024-09-20 19:04:43.25294824 +0000 UTC m=+278.942139838 (delta=86.343651ms)
	I0920 19:04:43.362585  302869 fix.go:200] guest clock delta is within tolerance: 86.343651ms
	I0920 19:04:43.362591  302869 start.go:83] releasing machines lock for "embed-certs-339897", held for 19.38474105s
	I0920 19:04:43.362620  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.362970  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:43.365988  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.366359  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.366380  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.366610  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.367130  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.367326  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.367423  302869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:04:43.367469  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.367602  302869 ssh_runner.go:195] Run: cat /version.json
	I0920 19:04:43.367628  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.370233  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.370594  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.370624  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.370649  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.370804  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.370998  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.371169  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.371191  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.371249  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.371406  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.371470  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:43.371566  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.371721  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.371885  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:43.490023  302869 ssh_runner.go:195] Run: systemctl --version
	I0920 19:04:43.496615  302869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:04:43.643493  302869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:04:43.649492  302869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:04:43.649560  302869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:04:43.665423  302869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:04:43.665460  302869 start.go:495] detecting cgroup driver to use...
	I0920 19:04:43.665530  302869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:04:43.681288  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:04:43.695161  302869 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:04:43.695218  302869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:04:43.708772  302869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:04:43.722803  302869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:04:43.834054  302869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:04:43.966014  302869 docker.go:233] disabling docker service ...
	I0920 19:04:43.966102  302869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:04:43.982324  302869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:04:43.995351  302869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:04:44.135635  302869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:04:44.262661  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:04:44.277377  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:04:44.299889  302869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:04:44.299965  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.312434  302869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:04:44.312534  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.323052  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.333504  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.343704  302869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:04:44.354386  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.364308  302869 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.383581  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.395013  302869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:04:44.405227  302869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:04:44.405279  302869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:04:44.418685  302869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:04:44.431323  302869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:04:44.558582  302869 ssh_runner.go:195] Run: sudo systemctl restart crio
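	(Editor's sketch, not part of the captured output: the sed edits above rewrite CRI-O's drop-in before this restart. A minimal way to confirm the result over the same SSH session is shown below; the drop-in path is taken from the commands above, while the grep pattern and exact formatting of the file are assumptions.)
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected values per the edits above (assumed formatting):
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",   (inside the default_sysctls = [ ... ] block)
	sudo systemctl is-active crio   # should report "active" once the restart above completes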
	I0920 19:04:44.644003  302869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:04:44.644091  302869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:04:44.649434  302869 start.go:563] Will wait 60s for crictl version
	I0920 19:04:44.649498  302869 ssh_runner.go:195] Run: which crictl
	I0920 19:04:44.653334  302869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:04:44.695896  302869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:04:44.696004  302869 ssh_runner.go:195] Run: crio --version
	I0920 19:04:44.726148  302869 ssh_runner.go:195] Run: crio --version
	I0920 19:04:44.757340  302869 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 19:04:43.388378  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Start
	I0920 19:04:43.388603  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Ensuring networks are active...
	I0920 19:04:43.389387  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Ensuring network default is active
	I0920 19:04:43.389863  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Ensuring network mk-default-k8s-diff-port-612312 is active
	I0920 19:04:43.390364  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Getting domain xml...
	I0920 19:04:43.391121  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Creating domain...
	I0920 19:04:44.718004  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting to get IP...
	I0920 19:04:44.718885  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.719317  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.719413  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:44.719288  304227 retry.go:31] will retry after 197.63251ms: waiting for machine to come up
	I0920 19:04:44.919026  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.919516  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.919547  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:44.919475  304227 retry.go:31] will retry after 305.409091ms: waiting for machine to come up
	I0920 19:04:45.227550  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.228191  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.228224  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:45.228147  304227 retry.go:31] will retry after 311.72219ms: waiting for machine to come up
	I0920 19:04:45.541945  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.542374  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.542403  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:45.542344  304227 retry.go:31] will retry after 547.369471ms: waiting for machine to come up
	I0920 19:04:46.091199  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.091731  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.091765  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:46.091693  304227 retry.go:31] will retry after 519.190971ms: waiting for machine to come up
	I0920 19:04:46.612175  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.612641  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.612672  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:46.612591  304227 retry.go:31] will retry after 715.908704ms: waiting for machine to come up
	I0920 19:04:47.330911  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:47.331350  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:47.331379  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:47.331294  304227 retry.go:31] will retry after 898.358136ms: waiting for machine to come up
	I0920 19:04:44.759090  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:44.762331  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:44.762696  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:44.762728  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:44.762954  302869 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 19:04:44.767209  302869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:04:44.781327  302869 kubeadm.go:883] updating cluster {Name:embed-certs-339897 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-339897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:04:44.781465  302869 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:04:44.781512  302869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:04:44.817356  302869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 19:04:44.817422  302869 ssh_runner.go:195] Run: which lz4
	I0920 19:04:44.821534  302869 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:04:44.826169  302869 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:04:44.826205  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 19:04:46.160290  302869 crio.go:462] duration metric: took 1.338795677s to copy over tarball
	I0920 19:04:46.160379  302869 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 19:04:48.265535  302869 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.105118482s)
	I0920 19:04:48.265580  302869 crio.go:469] duration metric: took 2.105250135s to extract the tarball
	I0920 19:04:48.265588  302869 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 19:04:48.302529  302869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:04:48.346391  302869 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:04:48.346419  302869 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:04:48.346427  302869 kubeadm.go:934] updating node { 192.168.72.72 8443 v1.31.1 crio true true} ...
	I0920 19:04:48.346566  302869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-339897 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-339897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:04:48.346668  302869 ssh_runner.go:195] Run: crio config
	I0920 19:04:48.396798  302869 cni.go:84] Creating CNI manager for ""
	I0920 19:04:48.396824  302869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:04:48.396834  302869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:04:48.396866  302869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.72 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-339897 NodeName:embed-certs-339897 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:04:48.397043  302869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-339897"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:04:48.397121  302869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:04:48.407031  302869 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:04:48.407118  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:04:48.416554  302869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0920 19:04:48.432540  302869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:04:48.448042  302869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0920 19:04:48.465193  302869 ssh_runner.go:195] Run: grep 192.168.72.72	control-plane.minikube.internal$ /etc/hosts
	I0920 19:04:48.469083  302869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:04:48.481123  302869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:04:48.609883  302869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:04:48.627512  302869 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897 for IP: 192.168.72.72
	I0920 19:04:48.627545  302869 certs.go:194] generating shared ca certs ...
	I0920 19:04:48.627571  302869 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:04:48.627784  302869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:04:48.627851  302869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:04:48.627866  302869 certs.go:256] generating profile certs ...
	I0920 19:04:48.628032  302869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/client.key
	I0920 19:04:48.628143  302869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/apiserver.key.308547ed
	I0920 19:04:48.628206  302869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/proxy-client.key
	I0920 19:04:48.628375  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:04:48.628421  302869 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:04:48.628435  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:04:48.628470  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:04:48.628509  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:04:48.628542  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:04:48.628616  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:04:48.629569  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:04:48.656203  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:04:48.708322  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:04:48.737686  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:04:48.772198  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 19:04:48.812086  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 19:04:48.836038  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:04:48.859972  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 19:04:48.883881  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:04:48.908399  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:04:48.930787  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:04:48.954052  302869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:04:48.970257  302869 ssh_runner.go:195] Run: openssl version
	I0920 19:04:48.976072  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:04:48.986449  302869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:04:48.990765  302869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:04:48.990833  302869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:04:48.996437  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:04:49.007111  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:04:49.017548  302869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:04:49.022044  302869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:04:49.022108  302869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:04:49.027752  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:04:49.038538  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:04:49.049445  302869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:04:49.054018  302869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:04:49.054100  302869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:04:49.059842  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:04:49.070748  302869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:04:49.075195  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:04:49.081100  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:04:49.086844  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:04:49.092790  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:04:49.098664  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:04:49.104562  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 19:04:49.110818  302869 kubeadm.go:392] StartCluster: {Name:embed-certs-339897 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-339897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:04:49.110952  302869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:04:49.111003  302869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:04:49.157700  302869 cri.go:89] found id: ""
	I0920 19:04:49.157774  302869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:04:49.168314  302869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:04:49.168339  302869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:04:49.168385  302869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:04:49.178632  302869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:04:49.179681  302869 kubeconfig.go:125] found "embed-certs-339897" server: "https://192.168.72.72:8443"
	I0920 19:04:49.181624  302869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:04:49.192084  302869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.72
	I0920 19:04:49.192159  302869 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:04:49.192188  302869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:04:49.192265  302869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:04:49.229141  302869 cri.go:89] found id: ""
	I0920 19:04:49.229232  302869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:04:49.247628  302869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:04:49.258190  302869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:04:49.258211  302869 kubeadm.go:157] found existing configuration files:
	
	I0920 19:04:49.258270  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:04:49.267769  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:04:49.267837  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:04:49.277473  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:04:49.286639  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:04:49.286712  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:04:49.296295  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:04:49.305705  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:04:49.305787  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:04:49.315191  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:04:49.324206  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:04:49.324288  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
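
The cleanup above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes the file when the check fails (here all four files are simply missing). A minimal shell sketch of that loop, using the same endpoint and commands shown in the log; the loop itself is only an illustration, not minikube's actual Go code:

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected endpoint
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done
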
	I0920 19:04:49.334065  302869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:04:49.344823  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:48.231405  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:48.231846  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:48.231872  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:48.231795  304227 retry.go:31] will retry after 1.105264539s: waiting for machine to come up
	I0920 19:04:49.338940  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:49.339413  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:49.339437  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:49.339366  304227 retry.go:31] will retry after 1.638536651s: waiting for machine to come up
	I0920 19:04:50.980320  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:50.980774  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:50.980805  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:50.980714  304227 retry.go:31] will retry after 2.064766522s: waiting for machine to come up
	I0920 19:04:49.450454  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.412643  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.629144  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.694547  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
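
restartPrimaryControlPlane rebuilds the control plane piecewise instead of running a full `kubeadm init`; the phases invoked above, in order, are certs, kubeconfig, kubelet-start, control-plane and etcd. Run by hand, with the binaries directory and config path copied from the log, the sequence is roughly:

    # same phase sequence as in the log above
    K8S_BIN=/var/lib/minikube/binaries/v1.31.1
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase certs all         --config "$CFG"
    sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase kubeconfig all    --config "$CFG"
    sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase kubelet-start     --config "$CFG"
    sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase control-plane all --config "$CFG"
    sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase etcd local        --config "$CFG"
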
	I0920 19:04:50.756897  302869 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:04:50.757008  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:51.258120  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:51.758025  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:52.258040  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:52.757302  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:52.774867  302869 api_server.go:72] duration metric: took 2.017964832s to wait for apiserver process to appear ...
	I0920 19:04:52.774906  302869 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:04:52.774954  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:55.383214  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:04:55.383255  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:04:55.383272  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:55.406625  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:04:55.406660  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:04:55.775825  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:55.785126  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:04:55.785157  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:04:56.275864  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:56.284002  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:04:56.284032  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:04:56.775547  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:56.779999  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 200:
	ok
	I0920 19:04:56.786034  302869 api_server.go:141] control plane version: v1.31.1
	I0920 19:04:56.786066  302869 api_server.go:131] duration metric: took 4.011153019s to wait for apiserver health ...
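
The healthz wait above progresses from 403 (anonymous requests rejected) to 500 (the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks not yet finished) to 200. A hand-run equivalent of the probe, hedged as a sketch (plain curl against the same endpoint; -k because the probe is unauthenticated):

    # poll the apiserver healthz endpoint until it returns "ok";
    # 403/500 responses are expected while RBAC bootstrap roles and
    # priority classes are still being created, as seen in the log above
    until curl -sk https://192.168.72.72:8443/healthz | grep -qx ok; do
      sleep 0.5
    done
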
	I0920 19:04:56.786076  302869 cni.go:84] Creating CNI manager for ""
	I0920 19:04:56.786082  302869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:04:56.788195  302869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:04:53.047487  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:53.048005  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:53.048027  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:53.047958  304227 retry.go:31] will retry after 2.829648578s: waiting for machine to come up
	I0920 19:04:55.879069  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:55.879538  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:55.879562  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:55.879488  304227 retry.go:31] will retry after 3.029828813s: waiting for machine to come up
	I0920 19:04:56.789703  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:04:56.799605  302869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
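
The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not echoed in the log; a typical bridge conflist of the kind CRI-O loads from that directory looks roughly like the following. Subnet, plugin list and names are illustrative assumptions, not the values minikube actually wrote:

    # illustrative only: the actual conflist content is not shown in the log
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    { "cniVersion": "1.0.0", "name": "bridge", "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } } ] }
    EOF
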
	I0920 19:04:56.816974  302869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:04:56.828470  302869 system_pods.go:59] 8 kube-system pods found
	I0920 19:04:56.828582  302869 system_pods.go:61] "coredns-7c65d6cfc9-xnfsk" [5e34a8b9-d748-484a-92ab-0d288ab5f35e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:04:56.828610  302869 system_pods.go:61] "etcd-embed-certs-339897" [1d0e8303-0ab9-418c-ba2d-f0ba33abad36] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 19:04:56.828637  302869 system_pods.go:61] "kube-apiserver-embed-certs-339897" [35569778-54b1-456d-8822-5a53a5e336fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 19:04:56.828655  302869 system_pods.go:61] "kube-controller-manager-embed-certs-339897" [6b9db655-59a1-4975-b3c7-fcc29a912850] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:04:56.828677  302869 system_pods.go:61] "kube-proxy-xs4nd" [a32f4c96-ae6e-4402-89c5-0226a4412d17] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 19:04:56.828694  302869 system_pods.go:61] "kube-scheduler-embed-certs-339897" [81dd07df-2ba9-4f8e-bb16-263bd6496a0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 19:04:56.828716  302869 system_pods.go:61] "metrics-server-6867b74b74-qqhcw" [b720a331-05ef-4528-bd25-0c1e7ef66b16] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:04:56.828729  302869 system_pods.go:61] "storage-provisioner" [08674813-f61d-49e9-a714-5f38b95f058e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 19:04:56.828738  302869 system_pods.go:74] duration metric: took 11.732519ms to wait for pod list to return data ...
	I0920 19:04:56.828748  302869 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:04:56.835747  302869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:04:56.835786  302869 node_conditions.go:123] node cpu capacity is 2
	I0920 19:04:56.835799  302869 node_conditions.go:105] duration metric: took 7.044914ms to run NodePressure ...
	I0920 19:04:56.835822  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:57.221422  302869 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 19:04:57.225575  302869 kubeadm.go:739] kubelet initialised
	I0920 19:04:57.225601  302869 kubeadm.go:740] duration metric: took 4.150722ms waiting for restarted kubelet to initialise ...
	I0920 19:04:57.225610  302869 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:04:57.230469  302869 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace to be "Ready" ...
	I0920 19:04:59.237961  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:58.911412  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:58.911990  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:58.912020  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:58.911956  304227 retry.go:31] will retry after 3.428044067s: waiting for machine to come up
	I0920 19:05:02.343216  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.343633  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Found IP for machine: 192.168.50.230
	I0920 19:05:02.343668  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has current primary IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.343679  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Reserving static IP address...
	I0920 19:05:02.344038  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Reserved static IP address: 192.168.50.230
	I0920 19:05:02.344084  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-612312", mac: "52:54:00:fa:2b:63", ip: "192.168.50.230"} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.344097  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for SSH to be available...
	I0920 19:05:02.344123  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | skip adding static IP to network mk-default-k8s-diff-port-612312 - found existing host DHCP lease matching {name: "default-k8s-diff-port-612312", mac: "52:54:00:fa:2b:63", ip: "192.168.50.230"}
	I0920 19:05:02.344136  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Getting to WaitForSSH function...
	I0920 19:05:02.346591  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.346932  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.346957  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.347110  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Using SSH client type: external
	I0920 19:05:02.347157  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa (-rw-------)
	I0920 19:05:02.347194  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:05:02.347214  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | About to run SSH command:
	I0920 19:05:02.347227  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | exit 0
	I0920 19:05:02.474040  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | SSH cmd err, output: <nil>: 
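
The "WaitForSSH" probe above shells out to the system ssh client with the option list dumped at 19:05:02.347194; flattened into a single runnable command it is:

    # flattened form of the external ssh invocation shown in the log
    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa \
        -p 22 docker@192.168.50.230 'exit 0'
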
	I0920 19:05:02.474475  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetConfigRaw
	I0920 19:05:02.475160  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:02.477963  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.478338  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.478361  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.478680  303063 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/config.json ...
	I0920 19:05:02.478923  303063 machine.go:93] provisionDockerMachine start ...
	I0920 19:05:02.478949  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:02.479166  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.481380  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.481759  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.481797  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.481961  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.482149  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.482307  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.482458  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.482619  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:02.482883  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:02.482900  303063 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:05:02.586360  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:05:02.586395  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetMachineName
	I0920 19:05:02.586694  303063 buildroot.go:166] provisioning hostname "default-k8s-diff-port-612312"
	I0920 19:05:02.586720  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetMachineName
	I0920 19:05:02.586951  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.589692  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.590053  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.590080  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.590230  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.590420  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.590563  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.590722  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.590936  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:02.591112  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:02.591126  303063 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-612312 && echo "default-k8s-diff-port-612312" | sudo tee /etc/hostname
	I0920 19:05:02.707768  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-612312
	
	I0920 19:05:02.707799  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.710647  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.711035  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.711064  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.711234  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.711448  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.711622  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.711791  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.711938  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:02.712098  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:02.712116  303063 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-612312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-612312/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-612312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:05:02.828234  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:05:02.828274  303063 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:05:02.828314  303063 buildroot.go:174] setting up certificates
	I0920 19:05:02.828327  303063 provision.go:84] configureAuth start
	I0920 19:05:02.828340  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetMachineName
	I0920 19:05:02.828700  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:02.831997  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.832469  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.832503  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.832704  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.835280  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.835577  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.835608  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.835699  303063 provision.go:143] copyHostCerts
	I0920 19:05:02.835766  303063 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:05:02.835787  303063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:05:02.835848  303063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:05:02.835947  303063 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:05:02.835955  303063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:05:02.835975  303063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:05:02.836026  303063 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:05:02.836033  303063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:05:02.836055  303063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:05:02.836103  303063 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-612312 san=[127.0.0.1 192.168.50.230 default-k8s-diff-port-612312 localhost minikube]
	I0920 19:05:02.983437  303063 provision.go:177] copyRemoteCerts
	I0920 19:05:02.983510  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:05:02.983541  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.986435  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.986791  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.986835  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.987110  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.987289  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.987438  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.987579  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.674961  303486 start.go:364] duration metric: took 3m34.601349843s to acquireMachinesLock for "old-k8s-version-425599"
	I0920 19:05:03.675039  303486 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:05:03.675048  303486 fix.go:54] fixHost starting: 
	I0920 19:05:03.675480  303486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:03.675541  303486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:03.694201  303486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34167
	I0920 19:05:03.694642  303486 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:03.695198  303486 main.go:141] libmachine: Using API Version  1
	I0920 19:05:03.695221  303486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:03.695609  303486 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:03.695793  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:03.695935  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetState
	I0920 19:05:03.697838  303486 fix.go:112] recreateIfNeeded on old-k8s-version-425599: state=Stopped err=<nil>
	I0920 19:05:03.697885  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	W0920 19:05:03.698080  303486 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:05:03.700333  303486 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-425599" ...
	I0920 19:05:03.701947  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .Start
	I0920 19:05:03.702184  303486 main.go:141] libmachine: (old-k8s-version-425599) Ensuring networks are active...
	I0920 19:05:03.703106  303486 main.go:141] libmachine: (old-k8s-version-425599) Ensuring network default is active
	I0920 19:05:03.703645  303486 main.go:141] libmachine: (old-k8s-version-425599) Ensuring network mk-old-k8s-version-425599 is active
	I0920 19:05:03.704152  303486 main.go:141] libmachine: (old-k8s-version-425599) Getting domain xml...
	I0920 19:05:03.704942  303486 main.go:141] libmachine: (old-k8s-version-425599) Creating domain...
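
The kvm2 driver restarts the stopped old-k8s-version-425599 machine through libvirt: it ensures the default and mk-old-k8s-version-425599 networks are active, fetches the domain XML, and starts the domain. A rough virsh equivalent of that sequence, offered as an assumption about what the driver's libvirt calls correspond to, not what it literally executes:

    # rough virsh equivalent of the driver's restart sequence above
    virsh net-start default                   || true   # already-active is fine
    virsh net-start mk-old-k8s-version-425599 || true
    virsh dumpxml old-k8s-version-425599 > /tmp/old-k8s-version-425599.xml
    virsh start old-k8s-version-425599
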
	I0920 19:05:01.738488  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:03.238934  302869 pod_ready.go:93] pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:03.238968  302869 pod_ready.go:82] duration metric: took 6.008471722s for pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:03.238978  302869 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:03.746041  302869 pod_ready.go:93] pod "etcd-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:03.746069  302869 pod_ready.go:82] duration metric: took 507.084418ms for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:03.746078  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
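
The readiness wait above is driven from the Go client; checking the same pods by hand against this cluster would look roughly like the following. This is an equivalent manual check, not what minikube runs, and it assumes the kubectl context carries the profile name seen earlier in the log:

    # equivalent manual readiness check (context name assumed from the profile)
    kubectl --context embed-certs-339897 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
    kubectl --context embed-certs-339897 -n kube-system wait pod \
      etcd-embed-certs-339897 kube-apiserver-embed-certs-339897 \
      --for=condition=Ready --timeout=4m
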
	I0920 19:05:03.072306  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0920 19:05:03.096078  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:05:03.122027  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:05:03.150314  303063 provision.go:87] duration metric: took 321.970593ms to configureAuth
	I0920 19:05:03.150345  303063 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:05:03.150557  303063 config.go:182] Loaded profile config "default-k8s-diff-port-612312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:05:03.150650  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.153988  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.154472  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.154524  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.154631  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.154840  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.155194  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.155397  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.155741  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:03.155990  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:03.156011  303063 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:05:03.417981  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:05:03.418020  303063 machine.go:96] duration metric: took 939.078754ms to provisionDockerMachine
	I0920 19:05:03.418038  303063 start.go:293] postStartSetup for "default-k8s-diff-port-612312" (driver="kvm2")
	I0920 19:05:03.418052  303063 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:05:03.418083  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.418456  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:05:03.418496  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.421689  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.422245  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.422282  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.422539  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.422747  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.422991  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.423144  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.509122  303063 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:05:03.515233  303063 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:05:03.515263  303063 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:05:03.515343  303063 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:05:03.515441  303063 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:05:03.515561  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:05:03.529346  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:03.559267  303063 start.go:296] duration metric: took 141.209592ms for postStartSetup
	I0920 19:05:03.559320  303063 fix.go:56] duration metric: took 20.196510123s for fixHost
	I0920 19:05:03.559348  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.563599  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.564320  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.564354  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.564605  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.564917  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.565120  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.565379  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.565588  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:03.565813  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:03.565827  303063 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:05:03.674803  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859103.651785276
	
	I0920 19:05:03.674833  303063 fix.go:216] guest clock: 1726859103.651785276
	I0920 19:05:03.674840  303063 fix.go:229] Guest: 2024-09-20 19:05:03.651785276 +0000 UTC Remote: 2024-09-20 19:05:03.559326363 +0000 UTC m=+280.560675514 (delta=92.458913ms)
	I0920 19:05:03.674862  303063 fix.go:200] guest clock delta is within tolerance: 92.458913ms
	I0920 19:05:03.674867  303063 start.go:83] releasing machines lock for "default-k8s-diff-port-612312", held for 20.312097182s
	I0920 19:05:03.674897  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.675183  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:03.677975  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.678374  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.678406  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.678552  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.679080  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.679255  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.679380  303063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:05:03.679429  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.679442  303063 ssh_runner.go:195] Run: cat /version.json
	I0920 19:05:03.679472  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.682443  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.682733  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.682876  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.682902  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.683014  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.683081  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.683104  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.683222  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.683326  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.683440  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.683512  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.683634  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.683721  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.683753  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.766786  303063 ssh_runner.go:195] Run: systemctl --version
	I0920 19:05:03.806684  303063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:05:03.950032  303063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:05:03.957153  303063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:05:03.957230  303063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:05:03.976784  303063 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:05:03.976814  303063 start.go:495] detecting cgroup driver to use...
	I0920 19:05:03.976902  303063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:05:03.994391  303063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:05:04.009961  303063 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:05:04.010021  303063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:05:04.023827  303063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:05:04.038585  303063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:05:04.157489  303063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:05:04.320396  303063 docker.go:233] disabling docker service ...
	I0920 19:05:04.320477  303063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:05:04.334865  303063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:05:04.350776  303063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:05:04.469438  303063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:05:04.596055  303063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:05:04.610548  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:05:04.629128  303063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:05:04.629192  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.640211  303063 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:05:04.640289  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.650877  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.661863  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.672695  303063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:05:04.684141  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.696358  303063 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.714936  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.726155  303063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:05:04.737400  303063 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:05:04.737460  303063 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:05:04.752752  303063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:05:04.767664  303063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:04.892509  303063 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:05:04.992361  303063 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:05:04.992465  303063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:05:04.997119  303063 start.go:563] Will wait 60s for crictl version
	I0920 19:05:04.997215  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:05:05.001132  303063 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:05:05.050835  303063 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:05:05.050955  303063 ssh_runner.go:195] Run: crio --version
	I0920 19:05:05.079870  303063 ssh_runner.go:195] Run: crio --version
	I0920 19:05:05.112325  303063 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 19:05:05.113600  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:05.116591  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:05.117037  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:05.117075  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:05.117334  303063 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 19:05:05.122086  303063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:05.135489  303063 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-612312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-612312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.230 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:05:05.135682  303063 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:05:05.135776  303063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:05.174026  303063 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 19:05:05.174090  303063 ssh_runner.go:195] Run: which lz4
	I0920 19:05:05.179003  303063 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:05:05.184119  303063 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:05:05.184168  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 19:05:06.479331  303063 crio.go:462] duration metric: took 1.300388015s to copy over tarball
	I0920 19:05:06.479434  303063 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 19:05:05.040094  303486 main.go:141] libmachine: (old-k8s-version-425599) Waiting to get IP...
	I0920 19:05:05.041198  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:05.041615  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:05.041711  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:05.041616  304380 retry.go:31] will retry after 264.073086ms: waiting for machine to come up
	I0920 19:05:05.307229  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:05.307761  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:05.307784  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:05.307713  304380 retry.go:31] will retry after 317.541552ms: waiting for machine to come up
	I0920 19:05:05.627262  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:05.627903  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:05.627929  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:05.627797  304380 retry.go:31] will retry after 432.236037ms: waiting for machine to come up
	I0920 19:05:06.062368  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:06.062842  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:06.062873  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:06.062804  304380 retry.go:31] will retry after 525.683807ms: waiting for machine to come up
	I0920 19:05:06.590915  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:06.591405  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:06.591434  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:06.591355  304380 retry.go:31] will retry after 542.00244ms: waiting for machine to come up
	I0920 19:05:07.135388  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:07.135944  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:07.135998  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:07.135908  304380 retry.go:31] will retry after 886.798885ms: waiting for machine to come up
	I0920 19:05:08.024147  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:08.024684  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:08.024713  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:08.024596  304380 retry.go:31] will retry after 826.869965ms: waiting for machine to come up
	I0920 19:05:08.853176  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:08.853793  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:08.853828  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:08.853736  304380 retry.go:31] will retry after 1.007422775s: waiting for machine to come up
	I0920 19:05:05.756992  302869 pod_ready.go:103] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:08.255312  302869 pod_ready.go:103] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:08.656490  303063 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.1770136s)
	I0920 19:05:08.656529  303063 crio.go:469] duration metric: took 2.177156837s to extract the tarball
	I0920 19:05:08.656539  303063 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 19:05:08.693153  303063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:08.733444  303063 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:05:08.733473  303063 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:05:08.733484  303063 kubeadm.go:934] updating node { 192.168.50.230 8444 v1.31.1 crio true true} ...
	I0920 19:05:08.733624  303063 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-612312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-612312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:05:08.733710  303063 ssh_runner.go:195] Run: crio config
	I0920 19:05:08.777872  303063 cni.go:84] Creating CNI manager for ""
	I0920 19:05:08.777913  303063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:05:08.777927  303063 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:05:08.777957  303063 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.230 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-612312 NodeName:default-k8s-diff-port-612312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:05:08.778143  303063 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.230
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-612312"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:05:08.778220  303063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:05:08.788133  303063 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:05:08.788208  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:05:08.797461  303063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0920 19:05:08.814111  303063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:05:08.832188  303063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0920 19:05:08.849801  303063 ssh_runner.go:195] Run: grep 192.168.50.230	control-plane.minikube.internal$ /etc/hosts
	I0920 19:05:08.853809  303063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:08.865685  303063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:08.985881  303063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:05:09.002387  303063 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312 for IP: 192.168.50.230
	I0920 19:05:09.002417  303063 certs.go:194] generating shared ca certs ...
	I0920 19:05:09.002441  303063 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:09.002656  303063 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:05:09.002727  303063 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:05:09.002741  303063 certs.go:256] generating profile certs ...
	I0920 19:05:09.002859  303063 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/client.key
	I0920 19:05:09.002940  303063 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/apiserver.key.637d18af
	I0920 19:05:09.002990  303063 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/proxy-client.key
	I0920 19:05:09.003207  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:05:09.003248  303063 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:05:09.003256  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:05:09.003278  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:05:09.003306  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:05:09.003328  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:05:09.003365  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:09.004030  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:05:09.037203  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:05:09.068858  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:05:09.095082  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:05:09.122167  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 19:05:09.147953  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 19:05:09.174251  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:05:09.202438  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:05:09.231354  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:05:09.256365  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:05:09.282589  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:05:09.308610  303063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:05:09.328798  303063 ssh_runner.go:195] Run: openssl version
	I0920 19:05:09.334685  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:05:09.345947  303063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:09.350772  303063 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:09.350838  303063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:09.356595  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:05:09.367559  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:05:09.380638  303063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:05:09.385362  303063 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:05:09.385429  303063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:05:09.391299  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:05:09.402065  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:05:09.412841  303063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:05:09.417074  303063 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:05:09.417138  303063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:05:09.422761  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:05:09.433780  303063 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:05:09.438734  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:05:09.444888  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:05:09.450715  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:05:09.456993  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:05:09.462716  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:05:09.468847  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 19:05:09.474680  303063 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-612312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-612312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.230 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:05:09.474780  303063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:05:09.474844  303063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:09.513886  303063 cri.go:89] found id: ""
	I0920 19:05:09.514006  303063 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:05:09.524385  303063 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:05:09.524417  303063 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:05:09.524479  303063 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:05:09.534288  303063 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:05:09.535251  303063 kubeconfig.go:125] found "default-k8s-diff-port-612312" server: "https://192.168.50.230:8444"
	I0920 19:05:09.537293  303063 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:05:09.547753  303063 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.230
	I0920 19:05:09.547796  303063 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:05:09.547812  303063 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:05:09.547890  303063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:09.590656  303063 cri.go:89] found id: ""
	I0920 19:05:09.590743  303063 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:05:09.607426  303063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:05:09.617258  303063 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:05:09.617280  303063 kubeadm.go:157] found existing configuration files:
	
	I0920 19:05:09.617344  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 19:05:09.626725  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:05:09.626813  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:05:09.636421  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 19:05:09.645711  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:05:09.645780  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:05:09.655351  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 19:05:09.664771  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:05:09.664833  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:05:09.674556  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 19:05:09.683677  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:05:09.683821  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:05:09.695159  303063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:05:09.704995  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:09.821398  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:10.642045  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:10.870266  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:10.935191  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:11.015669  303063 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:05:11.015787  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:11.516670  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:12.016486  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:12.516070  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:13.016012  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:13.031718  303063 api_server.go:72] duration metric: took 2.016048489s to wait for apiserver process to appear ...
	I0920 19:05:13.031752  303063 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:05:13.031781  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:13.032414  303063 api_server.go:269] stopped: https://192.168.50.230:8444/healthz: Get "https://192.168.50.230:8444/healthz": dial tcp 192.168.50.230:8444: connect: connection refused
	I0920 19:05:09.863227  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:09.863693  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:09.863721  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:09.863640  304380 retry.go:31] will retry after 1.556199895s: waiting for machine to come up
	I0920 19:05:11.422510  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:11.423244  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:11.423271  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:11.423179  304380 retry.go:31] will retry after 1.670177778s: waiting for machine to come up
	I0920 19:05:13.095982  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:13.096600  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:13.096626  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:13.096545  304380 retry.go:31] will retry after 2.71780554s: waiting for machine to come up
	I0920 19:05:10.256325  302869 pod_ready.go:93] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.256352  302869 pod_ready.go:82] duration metric: took 6.510267221s for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.256361  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.263229  302869 pod_ready.go:93] pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.263254  302869 pod_ready.go:82] duration metric: took 6.886052ms for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.263264  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xs4nd" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.270014  302869 pod_ready.go:93] pod "kube-proxy-xs4nd" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.270040  302869 pod_ready.go:82] duration metric: took 6.769102ms for pod "kube-proxy-xs4nd" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.270049  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.277232  302869 pod_ready.go:93] pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.277262  302869 pod_ready.go:82] duration metric: took 7.203732ms for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.277275  302869 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:12.284083  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:14.284983  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:13.532830  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:15.579530  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:05:15.579567  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:05:15.579584  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:15.596526  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:05:15.596570  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:05:16.032011  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:16.039310  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:05:16.039346  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:05:16.531881  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:16.536703  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:05:16.536736  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:05:17.032322  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:17.036979  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 200:
	ok
	I0920 19:05:17.043667  303063 api_server.go:141] control plane version: v1.31.1
	I0920 19:05:17.043701  303063 api_server.go:131] duration metric: took 4.011936277s to wait for apiserver health ...
	I0920 19:05:17.043710  303063 cni.go:84] Creating CNI manager for ""
	I0920 19:05:17.043716  303063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:05:17.045376  303063 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:05:17.046579  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:05:17.056771  303063 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:05:17.076571  303063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:05:17.085546  303063 system_pods.go:59] 8 kube-system pods found
	I0920 19:05:17.085584  303063 system_pods.go:61] "coredns-7c65d6cfc9-427x2" [48b87f9f-4697-4d76-aed1-3d54720172c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:05:17.085591  303063 system_pods.go:61] "etcd-default-k8s-diff-port-612312" [8537d9ce-87da-425d-a90a-eb4f30f9d23f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 19:05:17.085597  303063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-612312" [d5705298-0b1f-4f95-bf5d-c795e09dd30e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 19:05:17.085608  303063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-612312" [99b04f73-91d6-42d4-8def-09b65ca142f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:05:17.085615  303063 system_pods.go:61] "kube-proxy-zp8l5" [9fe30e51-ef3f-4448-916a-8ad75832b207] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 19:05:17.085624  303063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-612312" [4b9dfd39-5b60-4a90-9edf-0af7e99f0ac6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 19:05:17.085631  303063 system_pods.go:61] "metrics-server-6867b74b74-2tnqc" [35ce9a11-e606-41da-84bf-b3c5e9a18245] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:05:17.085638  303063 system_pods.go:61] "storage-provisioner" [6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 19:05:17.085646  303063 system_pods.go:74] duration metric: took 9.051189ms to wait for pod list to return data ...
	I0920 19:05:17.085657  303063 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:05:17.089161  303063 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:05:17.089190  303063 node_conditions.go:123] node cpu capacity is 2
	I0920 19:05:17.089201  303063 node_conditions.go:105] duration metric: took 3.534622ms to run NodePressure ...
	I0920 19:05:17.089218  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:17.442957  303063 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 19:05:17.447222  303063 kubeadm.go:739] kubelet initialised
	I0920 19:05:17.447247  303063 kubeadm.go:740] duration metric: took 4.255349ms waiting for restarted kubelet to initialise ...
	I0920 19:05:17.447255  303063 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:05:17.451839  303063 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.457216  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.457240  303063 pod_ready.go:82] duration metric: took 5.361636ms for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.457250  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.457256  303063 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.462245  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.462273  303063 pod_ready.go:82] duration metric: took 5.009342ms for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.462313  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.462326  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.468060  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.468087  303063 pod_ready.go:82] duration metric: took 5.75409ms for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.468099  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.468105  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.479703  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.479727  303063 pod_ready.go:82] duration metric: took 11.614638ms for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.479739  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.479750  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.879555  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-proxy-zp8l5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.879582  303063 pod_ready.go:82] duration metric: took 399.824208ms for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.879592  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-proxy-zp8l5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.879599  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:18.281551  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.281585  303063 pod_ready.go:82] duration metric: took 401.976884ms for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:18.281601  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.281611  303063 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:18.680674  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.680711  303063 pod_ready.go:82] duration metric: took 399.091849ms for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:18.680723  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.680730  303063 pod_ready.go:39] duration metric: took 1.233465539s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
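Note: the waits above probe each control-plane pod individually but are skipped while the node itself still reports Ready=False. Purely as an illustrative sketch (not part of the test output), the same checks can be reproduced by hand with kubectl, assuming the kubeconfig context carries the profile name as minikube normally writes it:

	# Node gate: while this prints "False", every per-pod wait above is skipped.
	kubectl --context default-k8s-diff-port-612312 get node default-k8s-diff-port-612312 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
	# Sample of the labels the wait loop covers (kube-dns, kube-proxy).
	kubectl --context default-k8s-diff-port-612312 -n kube-system get pods \
	  -l 'k8s-app in (kube-dns, kube-proxy)'
	# Block until the node flips to Ready, mirroring the later node_ready.go wait.
	kubectl --context default-k8s-diff-port-612312 wait --for=condition=Ready \
	  node/default-k8s-diff-port-612312 --timeout=6m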
	I0920 19:05:18.680747  303063 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:05:18.692948  303063 ops.go:34] apiserver oom_adj: -16
	I0920 19:05:18.692970  303063 kubeadm.go:597] duration metric: took 9.168545987s to restartPrimaryControlPlane
	I0920 19:05:18.692981  303063 kubeadm.go:394] duration metric: took 9.218309896s to StartCluster
	I0920 19:05:18.692999  303063 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:18.693078  303063 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:05:18.694921  303063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:18.695293  303063 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.230 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:05:18.696157  303063 config.go:182] Loaded profile config "default-k8s-diff-port-612312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:05:18.696187  303063 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:05:18.696357  303063 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-612312"
	I0920 19:05:18.696377  303063 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-612312"
	W0920 19:05:18.696387  303063 addons.go:243] addon storage-provisioner should already be in state true
	I0920 19:05:18.696419  303063 host.go:66] Checking if "default-k8s-diff-port-612312" exists ...
	I0920 19:05:18.696449  303063 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-612312"
	I0920 19:05:18.696495  303063 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-612312"
	I0920 19:05:18.696506  303063 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-612312"
	I0920 19:05:18.696588  303063 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-612312"
	W0920 19:05:18.696610  303063 addons.go:243] addon metrics-server should already be in state true
	I0920 19:05:18.696709  303063 host.go:66] Checking if "default-k8s-diff-port-612312" exists ...
	I0920 19:05:18.697239  303063 out.go:177] * Verifying Kubernetes components...
	I0920 19:05:18.697334  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.697386  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.697409  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.697409  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.697442  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.697531  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.698927  303063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:18.713346  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44359
	I0920 19:05:18.713346  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35127
	I0920 19:05:18.713967  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.714000  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.714472  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.714491  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.714572  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.714588  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.714961  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.714965  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.715163  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.715842  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.715893  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.717732  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I0920 19:05:18.718289  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.718553  303063 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-612312"
	W0920 19:05:18.718575  303063 addons.go:243] addon default-storageclass should already be in state true
	I0920 19:05:18.718604  303063 host.go:66] Checking if "default-k8s-diff-port-612312" exists ...
	I0920 19:05:18.718827  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.718852  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.718926  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.718956  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.719243  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.719782  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.719826  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.733219  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42087
	I0920 19:05:18.733789  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.734403  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.734422  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.734463  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41091
	I0920 19:05:18.734905  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.734993  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.735207  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.735363  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.735394  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.735703  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.736264  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.736321  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.737489  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:18.739977  303063 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 19:05:18.740477  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39547
	I0920 19:05:18.741217  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.741752  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:05:18.741770  303063 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:05:18.741791  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:18.741854  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.741875  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.742351  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.742547  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.744800  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:18.746006  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.746416  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:18.746442  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.746695  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:18.746961  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:18.746974  303063 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:15.815519  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:15.816035  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:15.816065  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:15.815974  304380 retry.go:31] will retry after 2.62788631s: waiting for machine to come up
	I0920 19:05:18.446768  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:18.447219  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:18.447240  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:18.447166  304380 retry.go:31] will retry after 4.025841071s: waiting for machine to come up
	I0920 19:05:16.784503  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:18.785829  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:18.747159  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:18.747332  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:18.748881  303063 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:05:18.748901  303063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:05:18.748932  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:18.752335  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.752787  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:18.752812  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.753180  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:18.753340  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:18.753491  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:18.753628  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:18.755106  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I0920 19:05:18.755543  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.756159  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.756182  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.756521  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.756710  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.758400  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:18.758674  303063 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:05:18.758690  303063 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:05:18.758707  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:18.762208  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.762748  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:18.762776  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.762950  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:18.763235  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:18.763518  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:18.763678  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:18.900876  303063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:05:18.919923  303063 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-612312" to be "Ready" ...
	I0920 19:05:18.993779  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:05:18.993814  303063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 19:05:19.001703  303063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:05:19.019424  303063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:05:19.054174  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:05:19.054202  303063 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:05:19.123651  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:05:19.123682  303063 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:05:19.186745  303063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:05:19.369866  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:19.369898  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:19.370210  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:19.370229  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:19.370246  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:19.370270  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:19.370552  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:19.370593  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:19.370625  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Closing plugin on server side
	I0920 19:05:19.380105  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:19.380140  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:19.380456  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:19.380472  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.145346  303063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.12587258s)
	I0920 19:05:20.145412  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.145427  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.145769  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Closing plugin on server side
	I0920 19:05:20.145834  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.145846  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.145866  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.145877  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.146126  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.146144  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.152067  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.152084  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.152361  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.152379  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.152388  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.152395  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.152625  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.152662  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Closing plugin on server side
	I0920 19:05:20.152711  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.152729  303063 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-612312"
	I0920 19:05:20.154940  303063 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0920 19:05:20.156326  303063 addons.go:510] duration metric: took 1.460148296s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
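Note: the addon step above applied the storageclass, storage-provisioner and metrics-server manifests listed in the Run: lines. An illustrative post-check against the same cluster (context name assumed to match the profile; the APIService name is the one metrics-server normally registers, and minikube's default storage class is usually "standard"):

	kubectl --context default-k8s-diff-port-612312 -n kube-system get deploy metrics-server
	kubectl --context default-k8s-diff-port-612312 get apiservice v1beta1.metrics.k8s.io   # from metrics-apiservice.yaml
	kubectl --context default-k8s-diff-port-612312 get storageclass                        # default-storageclass addon
	kubectl --context default-k8s-diff-port-612312 -n kube-system get pod storage-provisioner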
	I0920 19:05:20.923687  303063 node_ready.go:53] node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:22.924271  303063 node_ready.go:53] node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:23.791151  302538 start.go:364] duration metric: took 54.811585482s to acquireMachinesLock for "no-preload-037711"
	I0920 19:05:23.791208  302538 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:05:23.791219  302538 fix.go:54] fixHost starting: 
	I0920 19:05:23.791657  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:23.791696  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:23.809350  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32987
	I0920 19:05:23.809873  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:23.810520  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:05:23.810555  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:23.810893  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:23.811118  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:23.811286  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:05:23.812885  302538 fix.go:112] recreateIfNeeded on no-preload-037711: state=Stopped err=<nil>
	I0920 19:05:23.812914  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	W0920 19:05:23.813135  302538 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:05:23.815287  302538 out.go:177] * Restarting existing kvm2 VM for "no-preload-037711" ...
	I0920 19:05:22.477850  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.478419  303486 main.go:141] libmachine: (old-k8s-version-425599) Found IP for machine: 192.168.39.53
	I0920 19:05:22.478454  303486 main.go:141] libmachine: (old-k8s-version-425599) Reserving static IP address...
	I0920 19:05:22.478473  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has current primary IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.478983  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "old-k8s-version-425599", mac: "52:54:00:d2:a5:70", ip: "192.168.39.53"} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.479021  303486 main.go:141] libmachine: (old-k8s-version-425599) Reserved static IP address: 192.168.39.53
	I0920 19:05:22.479040  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | skip adding static IP to network mk-old-k8s-version-425599 - found existing host DHCP lease matching {name: "old-k8s-version-425599", mac: "52:54:00:d2:a5:70", ip: "192.168.39.53"}
	I0920 19:05:22.479055  303486 main.go:141] libmachine: (old-k8s-version-425599) Waiting for SSH to be available...
	I0920 19:05:22.479067  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | Getting to WaitForSSH function...
	I0920 19:05:22.481118  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.481359  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.481382  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.481556  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | Using SSH client type: external
	I0920 19:05:22.481570  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa (-rw-------)
	I0920 19:05:22.481600  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:05:22.481612  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | About to run SSH command:
	I0920 19:05:22.481627  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | exit 0
	I0920 19:05:22.606383  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | SSH cmd err, output: <nil>: 
	I0920 19:05:22.606783  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetConfigRaw
	I0920 19:05:22.607408  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:22.610155  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.610474  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.610506  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.610784  303486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/config.json ...
	I0920 19:05:22.611075  303486 machine.go:93] provisionDockerMachine start ...
	I0920 19:05:22.611103  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:22.611332  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.613838  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.614250  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.614283  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.614395  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:22.614609  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.614776  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.614950  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:22.615136  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:22.615331  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:22.615344  303486 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:05:22.718330  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:05:22.718363  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 19:05:22.718651  303486 buildroot.go:166] provisioning hostname "old-k8s-version-425599"
	I0920 19:05:22.718697  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 19:05:22.718913  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.722027  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.722334  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.722370  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.722559  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:22.722738  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.722909  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.723086  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:22.723261  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:22.723473  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:22.723491  303486 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-425599 && echo "old-k8s-version-425599" | sudo tee /etc/hostname
	I0920 19:05:22.841563  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-425599
	
	I0920 19:05:22.841592  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.844327  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.844716  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.844748  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.844970  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:22.845154  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.845306  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.845413  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:22.845570  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:22.845793  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:22.845818  303486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-425599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-425599/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-425599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:05:22.959542  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
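Note: the multi-line shell snippet above is the idempotent hostname fix-up: it rewrites the existing 127.0.1.1 entry in /etc/hosts if one is present, otherwise appends one. A quick illustrative verification from inside the guest (e.g. via `minikube ssh -p old-k8s-version-425599`, profile name taken from the log):

	hostname                                      # should print old-k8s-version-425599
	grep -n 'old-k8s-version-425599' /etc/hosts   # exactly one 127.0.1.1 entry expected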
	I0920 19:05:22.959572  303486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:05:22.959615  303486 buildroot.go:174] setting up certificates
	I0920 19:05:22.959625  303486 provision.go:84] configureAuth start
	I0920 19:05:22.959635  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 19:05:22.959972  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:22.962506  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.962845  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.962883  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.963020  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.965352  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.965734  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.965755  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.965936  303486 provision.go:143] copyHostCerts
	I0920 19:05:22.965999  303486 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:05:22.966018  303486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:05:22.966073  303486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:05:22.966165  303486 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:05:22.966173  303486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:05:22.966193  303486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:05:22.966250  303486 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:05:22.966257  303486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:05:22.966274  303486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:05:22.966368  303486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-425599 san=[127.0.0.1 192.168.39.53 localhost minikube old-k8s-version-425599]
	I0920 19:05:23.156245  303486 provision.go:177] copyRemoteCerts
	I0920 19:05:23.156322  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:05:23.156356  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.159694  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.160062  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.160105  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.160283  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.160467  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.160633  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.160755  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.244439  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:05:23.271796  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 19:05:23.298124  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:05:23.323466  303486 provision.go:87] duration metric: took 363.82725ms to configureAuth
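Note: configureAuth above regenerated the machine server certificate with the SANs listed in the log (127.0.0.1, 192.168.39.53, localhost, minikube, old-k8s-version-425599). As an illustrative check only, the certificate can be inspected on the CI host at the path the log reports:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'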
	I0920 19:05:23.323496  303486 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:05:23.323711  303486 config.go:182] Loaded profile config "old-k8s-version-425599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 19:05:23.323805  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.326985  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.327336  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.327363  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.327573  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.327788  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.328003  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.328161  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.328315  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:23.328492  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:23.328506  303486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:05:23.559721  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:05:23.559755  303486 machine.go:96] duration metric: took 948.663189ms to provisionDockerMachine
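Note: the sysconfig drop-in written just above passes --insecure-registry 10.96.0.0/12 to CRI-O and restarts it in the same SSH command. Illustrative checks from inside the guest (a sketch, not part of the test flow):

	cat /etc/sysconfig/crio.minikube   # should contain CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio           # the restart in the command above should leave crio "active"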
	I0920 19:05:23.559770  303486 start.go:293] postStartSetup for "old-k8s-version-425599" (driver="kvm2")
	I0920 19:05:23.559781  303486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:05:23.559812  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.560186  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:05:23.560225  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.563146  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.563462  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.563491  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.563786  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.564018  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.564214  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.564365  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.645013  303486 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:05:23.649198  303486 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:05:23.649230  303486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:05:23.649300  303486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:05:23.649416  303486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:05:23.649544  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:05:23.659351  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:23.683405  303486 start.go:296] duration metric: took 123.617289ms for postStartSetup
	I0920 19:05:23.683466  303486 fix.go:56] duration metric: took 20.008417985s for fixHost
	I0920 19:05:23.683495  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.686540  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.686962  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.686988  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.687209  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.687445  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.687624  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.687803  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.688001  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:23.688188  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:23.688206  303486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:05:23.790992  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859123.767729644
	
	I0920 19:05:23.791024  303486 fix.go:216] guest clock: 1726859123.767729644
	I0920 19:05:23.791035  303486 fix.go:229] Guest: 2024-09-20 19:05:23.767729644 +0000 UTC Remote: 2024-09-20 19:05:23.683472425 +0000 UTC m=+234.770765310 (delta=84.257219ms)
	I0920 19:05:23.791061  303486 fix.go:200] guest clock delta is within tolerance: 84.257219ms
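Note: the delta reported above is simply guest clock minus host clock at the same instant: 19:05:23.767729644 - 19:05:23.683472425 = 0.084257219 s (about 84.26 ms), which the fix step accepts as within tolerance, so no clock adjustment is made.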
	I0920 19:05:23.791068  303486 start.go:83] releasing machines lock for "old-k8s-version-425599", held for 20.116056408s
	I0920 19:05:23.791101  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.791432  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:23.794540  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.795015  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.795048  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.795226  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.795779  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.795992  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.796129  303486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:05:23.796180  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.796241  303486 ssh_runner.go:195] Run: cat /version.json
	I0920 19:05:23.796265  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.799032  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.799374  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.799399  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.799418  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.799540  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.799743  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.799874  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.799890  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.799906  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.800084  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.800077  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.800198  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.800365  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.800514  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.924885  303486 ssh_runner.go:195] Run: systemctl --version
	I0920 19:05:23.932642  303486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:05:21.284671  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:23.284813  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:24.083860  303486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:05:24.090360  303486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:05:24.090444  303486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:05:24.112281  303486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:05:24.112310  303486 start.go:495] detecting cgroup driver to use...
	I0920 19:05:24.112383  303486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:05:24.136600  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:05:24.154552  303486 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:05:24.154631  303486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:05:24.170600  303486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:05:24.186071  303486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:05:24.319752  303486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:05:24.498299  303486 docker.go:233] disabling docker service ...
	I0920 19:05:24.498385  303486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:05:24.515762  303486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:05:24.533482  303486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:05:24.687481  303486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:05:24.820191  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:05:24.835255  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:05:24.856179  303486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 19:05:24.856253  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.868991  303486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:05:24.869080  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.884074  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.898732  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.911016  303486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:05:24.922757  303486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:05:24.937719  303486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:05:24.937828  303486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:05:24.955496  303486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:05:24.966347  303486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:25.114758  303486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:05:25.226807  303486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:05:25.226984  303486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:05:25.234576  303486 start.go:563] Will wait 60s for crictl version
	I0920 19:05:25.234664  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:25.238739  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:05:25.282242  303486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:05:25.282344  303486 ssh_runner.go:195] Run: crio --version
	I0920 19:05:25.317733  303486 ssh_runner.go:195] Run: crio --version
	I0920 19:05:25.353767  303486 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 19:05:23.816707  302538 main.go:141] libmachine: (no-preload-037711) Calling .Start
	I0920 19:05:23.817003  302538 main.go:141] libmachine: (no-preload-037711) Ensuring networks are active...
	I0920 19:05:23.817953  302538 main.go:141] libmachine: (no-preload-037711) Ensuring network default is active
	I0920 19:05:23.818345  302538 main.go:141] libmachine: (no-preload-037711) Ensuring network mk-no-preload-037711 is active
	I0920 19:05:23.818824  302538 main.go:141] libmachine: (no-preload-037711) Getting domain xml...
	I0920 19:05:23.819705  302538 main.go:141] libmachine: (no-preload-037711) Creating domain...
	I0920 19:05:25.216298  302538 main.go:141] libmachine: (no-preload-037711) Waiting to get IP...
	I0920 19:05:25.217452  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:25.218073  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:25.218138  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:25.218047  304582 retry.go:31] will retry after 256.299732ms: waiting for machine to come up
	I0920 19:05:25.475745  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:25.476451  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:25.476485  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:25.476388  304582 retry.go:31] will retry after 298.732749ms: waiting for machine to come up
	I0920 19:05:25.777093  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:25.777731  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:25.777755  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:25.777701  304582 retry.go:31] will retry after 360.011383ms: waiting for machine to come up
	I0920 19:05:26.139480  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:26.140100  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:26.140132  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:26.140049  304582 retry.go:31] will retry after 593.756705ms: waiting for machine to come up
	I0920 19:05:24.924455  303063 node_ready.go:53] node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:26.425132  303063 node_ready.go:49] node "default-k8s-diff-port-612312" has status "Ready":"True"
	I0920 19:05:26.425165  303063 node_ready.go:38] duration metric: took 7.505210484s for node "default-k8s-diff-port-612312" to be "Ready" ...
	I0920 19:05:26.425181  303063 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:05:26.433394  303063 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:26.440462  303063 pod_ready.go:93] pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:26.440497  303063 pod_ready.go:82] duration metric: took 7.072952ms for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:26.440513  303063 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:25.354959  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:25.358179  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:25.358467  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:25.358495  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:25.358739  303486 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 19:05:25.362714  303486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:25.375880  303486 kubeadm.go:883] updating cluster {Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:05:25.376024  303486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 19:05:25.376074  303486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:25.420224  303486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 19:05:25.420307  303486 ssh_runner.go:195] Run: which lz4
	I0920 19:05:25.424775  303486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:05:25.430102  303486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:05:25.430151  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 19:05:27.014068  303486 crio.go:462] duration metric: took 1.589333502s to copy over tarball
	I0920 19:05:27.014160  303486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 19:05:25.786282  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:27.788058  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:26.735924  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:26.736558  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:26.736582  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:26.736458  304582 retry.go:31] will retry after 712.118443ms: waiting for machine to come up
	I0920 19:05:27.450059  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:27.450696  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:27.450719  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:27.450592  304582 retry.go:31] will retry after 588.649809ms: waiting for machine to come up
	I0920 19:05:28.041216  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:28.041760  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:28.041791  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:28.041691  304582 retry.go:31] will retry after 869.42079ms: waiting for machine to come up
	I0920 19:05:28.912809  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:28.913240  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:28.913265  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:28.913174  304582 retry.go:31] will retry after 1.410011475s: waiting for machine to come up
	I0920 19:05:30.324367  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:30.324952  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:30.324980  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:30.324875  304582 retry.go:31] will retry after 1.398358739s: waiting for machine to come up
	I0920 19:05:28.454512  303063 pod_ready.go:103] pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:31.546557  303063 pod_ready.go:103] pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:32.072690  303063 pod_ready.go:93] pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.072719  303063 pod_ready.go:82] duration metric: took 5.632196538s for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.072734  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.081029  303063 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.081062  303063 pod_ready.go:82] duration metric: took 8.319382ms for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.081076  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.087314  303063 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.087338  303063 pod_ready.go:82] duration metric: took 6.253184ms for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.087351  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.093286  303063 pod_ready.go:93] pod "kube-proxy-zp8l5" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.093313  303063 pod_ready.go:82] duration metric: took 5.953425ms for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.093326  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.098529  303063 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.098553  303063 pod_ready.go:82] duration metric: took 5.218413ms for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.098565  303063 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:30.096727  303486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.082523066s)
	I0920 19:05:30.096778  303486 crio.go:469] duration metric: took 3.082671461s to extract the tarball
	I0920 19:05:30.096789  303486 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 19:05:30.148059  303486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:30.184547  303486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 19:05:30.184578  303486 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 19:05:30.184672  303486 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:30.184686  303486 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.184711  303486 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.184730  303486 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.184732  303486 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.184686  303486 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 19:05:30.184693  303486 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.184792  303486 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.186558  303486 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:30.186609  303486 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 19:05:30.186607  303486 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.186616  303486 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.186688  303486 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.186698  303486 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.186701  303486 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.186565  303486 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.425283  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 19:05:30.469378  303486 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 19:05:30.469448  303486 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 19:05:30.469514  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.475453  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:05:30.493250  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.505003  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.513203  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.514365  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.521729  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:05:30.533265  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.580710  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.613984  303486 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 19:05:30.614033  303486 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.614085  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.653094  303486 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 19:05:30.653150  303486 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.653205  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.675697  303486 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 19:05:30.675730  303486 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 19:05:30.675752  303486 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.675762  303486 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.675805  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.675820  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.675805  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:05:30.709199  303486 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 19:05:30.709261  303486 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.709310  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.720146  303486 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 19:05:30.720198  303486 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.720233  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.720313  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.720241  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.720374  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.720247  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.737444  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.737487  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 19:05:30.843272  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.843362  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.843366  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.860414  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.860462  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.860430  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.954641  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.982227  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.982263  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:05:31.041996  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:31.042032  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:31.042650  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:31.042722  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:31.070786  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 19:05:31.120407  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 19:05:31.135751  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 19:05:31.163591  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 19:05:31.164483  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 19:05:31.164587  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 19:05:31.345957  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:31.486337  303486 cache_images.go:92] duration metric: took 1.301737533s to LoadCachedImages
	W0920 19:05:31.486434  303486 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0920 19:05:31.486452  303486 kubeadm.go:934] updating node { 192.168.39.53 8443 v1.20.0 crio true true} ...
	I0920 19:05:31.486576  303486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-425599 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:05:31.486661  303486 ssh_runner.go:195] Run: crio config
	I0920 19:05:31.544181  303486 cni.go:84] Creating CNI manager for ""
	I0920 19:05:31.544215  303486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:05:31.544229  303486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:05:31.544257  303486 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.53 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-425599 NodeName:old-k8s-version-425599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 19:05:31.544465  303486 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-425599"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:05:31.544556  303486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 19:05:31.559445  303486 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:05:31.559542  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:05:31.570446  303486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0920 19:05:31.588741  303486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:05:31.606454  303486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0920 19:05:31.624483  303486 ssh_runner.go:195] Run: grep 192.168.39.53	control-plane.minikube.internal$ /etc/hosts
	I0920 19:05:31.628285  303486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:31.641039  303486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:31.771690  303486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:05:31.789746  303486 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599 for IP: 192.168.39.53
	I0920 19:05:31.789775  303486 certs.go:194] generating shared ca certs ...
	I0920 19:05:31.789806  303486 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:31.790074  303486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:05:31.790150  303486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:05:31.790165  303486 certs.go:256] generating profile certs ...
	I0920 19:05:31.798117  303486 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/client.key
	I0920 19:05:31.798270  303486 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.key.e78cb154
	I0920 19:05:31.798333  303486 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.key
	I0920 19:05:31.798499  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:05:31.798543  303486 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:05:31.798557  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:05:31.798608  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:05:31.798659  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:05:31.798692  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:05:31.798748  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:31.799624  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:05:31.843298  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:05:31.877299  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:05:31.909777  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:05:31.947787  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 19:05:31.991175  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 19:05:32.019393  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:05:32.048475  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:05:32.084354  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:05:32.112161  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:05:32.138991  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:05:32.167653  303486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:05:32.185485  303486 ssh_runner.go:195] Run: openssl version
	I0920 19:05:32.192030  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:05:32.203761  303486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:32.209550  303486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:32.209650  303486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:32.216277  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:05:32.228192  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:05:32.239984  303486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:05:32.244782  303486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:05:32.244848  303486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:05:32.250865  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:05:32.262035  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:05:32.273790  303486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:05:32.279335  303486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:05:32.279414  303486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:05:32.286501  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:05:32.298052  303486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:05:32.303064  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:05:32.309973  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:05:32.316704  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:05:32.323166  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:05:32.330126  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:05:32.336554  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 19:05:32.343303  303486 kubeadm.go:392] StartCluster: {Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:05:32.343413  303486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:05:32.343473  303486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:32.387562  303486 cri.go:89] found id: ""
	I0920 19:05:32.387653  303486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:05:32.398143  303486 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:05:32.398167  303486 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:05:32.398222  303486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:05:32.407776  303486 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:05:32.409205  303486 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-425599" does not appear in /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:05:32.410267  303486 kubeconfig.go:62] /home/jenkins/minikube-integration/19679-237658/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-425599" cluster setting kubeconfig missing "old-k8s-version-425599" context setting]
	I0920 19:05:32.411776  303486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:32.457074  303486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:05:32.468055  303486 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.53
	I0920 19:05:32.468113  303486 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:05:32.468132  303486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:05:32.468211  303486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:32.505151  303486 cri.go:89] found id: ""
	I0920 19:05:32.505241  303486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:05:32.521391  303486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:05:32.531705  303486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:05:32.531728  303486 kubeadm.go:157] found existing configuration files:
	
	I0920 19:05:32.531774  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:05:32.541137  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:05:32.541219  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:05:32.550684  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:05:32.560262  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:05:32.560352  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:05:32.569735  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:05:32.579126  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:05:32.579199  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:05:32.589508  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:05:32.600985  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:05:32.601100  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:05:32.611511  303486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:05:32.622346  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:32.755562  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:33.793472  303486 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.037864747s)
	I0920 19:05:33.793513  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:30.283826  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:32.285077  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:31.725721  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:31.726171  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:31.726198  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:31.726127  304582 retry.go:31] will retry after 2.32427136s: waiting for machine to come up
	I0920 19:05:34.052412  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:34.053005  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:34.053043  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:34.052923  304582 retry.go:31] will retry after 2.159036217s: waiting for machine to come up
	I0920 19:05:36.215059  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:36.215561  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:36.215585  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:36.215501  304582 retry.go:31] will retry after 3.424610182s: waiting for machine to come up
	I0920 19:05:34.105780  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:36.106491  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:34.021260  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:34.142176  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:34.235507  303486 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:05:34.235618  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:34.736586  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:35.236065  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:35.735783  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:36.236406  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:36.736243  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:37.235994  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:37.736168  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:38.236559  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:38.736139  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:34.784743  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:37.282598  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:39.284890  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:39.642163  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:39.642600  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:39.642642  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:39.642541  304582 retry.go:31] will retry after 3.073679854s: waiting for machine to come up
	I0920 19:05:38.116192  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:40.605958  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:39.236010  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:39.735723  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:40.236003  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:40.735741  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:41.235689  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:41.736411  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:42.236028  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:42.735814  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:43.236391  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:43.736174  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:41.783707  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:43.784197  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:42.719195  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.719748  302538 main.go:141] libmachine: (no-preload-037711) Found IP for machine: 192.168.61.136
	I0920 19:05:42.719775  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has current primary IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.719780  302538 main.go:141] libmachine: (no-preload-037711) Reserving static IP address...
	I0920 19:05:42.720201  302538 main.go:141] libmachine: (no-preload-037711) Reserved static IP address: 192.168.61.136
	I0920 19:05:42.720220  302538 main.go:141] libmachine: (no-preload-037711) Waiting for SSH to be available...
	I0920 19:05:42.720239  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "no-preload-037711", mac: "52:54:00:b0:ac:14", ip: "192.168.61.136"} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.720268  302538 main.go:141] libmachine: (no-preload-037711) DBG | skip adding static IP to network mk-no-preload-037711 - found existing host DHCP lease matching {name: "no-preload-037711", mac: "52:54:00:b0:ac:14", ip: "192.168.61.136"}
	I0920 19:05:42.720280  302538 main.go:141] libmachine: (no-preload-037711) DBG | Getting to WaitForSSH function...
	I0920 19:05:42.722402  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.722661  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.722686  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.722864  302538 main.go:141] libmachine: (no-preload-037711) DBG | Using SSH client type: external
	I0920 19:05:42.722877  302538 main.go:141] libmachine: (no-preload-037711) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa (-rw-------)
	I0920 19:05:42.722939  302538 main.go:141] libmachine: (no-preload-037711) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:05:42.722962  302538 main.go:141] libmachine: (no-preload-037711) DBG | About to run SSH command:
	I0920 19:05:42.722979  302538 main.go:141] libmachine: (no-preload-037711) DBG | exit 0
	I0920 19:05:42.850057  302538 main.go:141] libmachine: (no-preload-037711) DBG | SSH cmd err, output: <nil>: 
	I0920 19:05:42.850451  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetConfigRaw
	I0920 19:05:42.851176  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:42.853807  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.854268  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.854290  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.854558  302538 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/config.json ...
	I0920 19:05:42.854764  302538 machine.go:93] provisionDockerMachine start ...
	I0920 19:05:42.854782  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:42.854999  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:42.857347  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.857683  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.857712  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.857892  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:42.858073  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.858242  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.858385  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:42.858569  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:42.858755  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:42.858766  302538 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:05:42.962098  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:05:42.962137  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:05:42.962455  302538 buildroot.go:166] provisioning hostname "no-preload-037711"
	I0920 19:05:42.962488  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:05:42.962696  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:42.965410  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.965793  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.965822  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.965954  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:42.966128  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.966285  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.966442  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:42.966650  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:42.966822  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:42.966847  302538 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-037711 && echo "no-preload-037711" | sudo tee /etc/hostname
	I0920 19:05:43.089291  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-037711
	
	I0920 19:05:43.089338  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.092213  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.092658  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.092689  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.092828  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.093031  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.093188  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.093305  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.093478  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:43.093692  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:43.093719  302538 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-037711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-037711/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-037711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:05:43.210625  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:05:43.210660  302538 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:05:43.210720  302538 buildroot.go:174] setting up certificates
	I0920 19:05:43.210740  302538 provision.go:84] configureAuth start
	I0920 19:05:43.210758  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:05:43.211093  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:43.213829  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.214346  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.214379  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.214542  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.216979  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.217294  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.217319  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.217461  302538 provision.go:143] copyHostCerts
	I0920 19:05:43.217526  302538 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:05:43.217546  302538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:05:43.217610  302538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:05:43.217708  302538 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:05:43.217720  302538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:05:43.217750  302538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:05:43.217885  302538 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:05:43.217899  302538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:05:43.217947  302538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:05:43.218008  302538 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.no-preload-037711 san=[127.0.0.1 192.168.61.136 localhost minikube no-preload-037711]
	I0920 19:05:43.395507  302538 provision.go:177] copyRemoteCerts
	I0920 19:05:43.395576  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:05:43.395607  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.398288  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.398663  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.398694  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.398899  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.399087  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.399205  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.399324  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:43.488543  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 19:05:43.514793  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:05:43.537520  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:05:43.561983  302538 provision.go:87] duration metric: took 351.22541ms to configureAuth
	I0920 19:05:43.562021  302538 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:05:43.562213  302538 config.go:182] Loaded profile config "no-preload-037711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:05:43.562292  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.565776  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.566235  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.566270  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.566486  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.566706  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.566895  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.567043  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.567251  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:43.567439  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:43.567454  302538 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:05:43.797110  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:05:43.797142  302538 machine.go:96] duration metric: took 942.364782ms to provisionDockerMachine
	I0920 19:05:43.797157  302538 start.go:293] postStartSetup for "no-preload-037711" (driver="kvm2")
	I0920 19:05:43.797171  302538 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:05:43.797193  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:43.797516  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:05:43.797546  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.800148  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.800532  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.800559  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.800794  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.800993  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.801158  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.801255  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:43.885788  302538 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:05:43.890070  302538 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:05:43.890108  302538 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:05:43.890198  302538 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:05:43.890293  302538 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:05:43.890405  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:05:43.899679  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:43.924928  302538 start.go:296] duration metric: took 127.752462ms for postStartSetup
	I0920 19:05:43.924973  302538 fix.go:56] duration metric: took 20.133755115s for fixHost
	I0920 19:05:43.924996  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.927678  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.928059  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.928099  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.928277  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.928517  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.928685  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.928815  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.928979  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:43.929190  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:43.929204  302538 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:05:44.042745  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859144.016675004
	
	I0920 19:05:44.042769  302538 fix.go:216] guest clock: 1726859144.016675004
	I0920 19:05:44.042776  302538 fix.go:229] Guest: 2024-09-20 19:05:44.016675004 +0000 UTC Remote: 2024-09-20 19:05:43.924977449 +0000 UTC m=+357.534412233 (delta=91.697555ms)
	I0920 19:05:44.042804  302538 fix.go:200] guest clock delta is within tolerance: 91.697555ms
	I0920 19:05:44.042819  302538 start.go:83] releasing machines lock for "no-preload-037711", held for 20.251627041s
	I0920 19:05:44.042842  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.043134  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:44.046071  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.046412  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:44.046440  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.046613  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.047113  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.047278  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.047366  302538 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:05:44.047428  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:44.047520  302538 ssh_runner.go:195] Run: cat /version.json
	I0920 19:05:44.047548  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:44.050275  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.050358  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.050849  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:44.050872  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:44.050892  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.050915  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.051095  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:44.051259  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:44.051259  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:44.051496  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:44.051637  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:44.051655  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:44.051789  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:44.051953  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:44.134420  302538 ssh_runner.go:195] Run: systemctl --version
	I0920 19:05:44.175303  302538 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:05:44.319129  302538 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:05:44.325894  302538 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:05:44.325975  302538 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:05:44.341779  302538 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:05:44.341809  302538 start.go:495] detecting cgroup driver to use...
	I0920 19:05:44.341899  302538 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:05:44.358211  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:05:44.373240  302538 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:05:44.373327  302538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:05:44.387429  302538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:05:44.401684  302538 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:05:44.521292  302538 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:05:44.668050  302538 docker.go:233] disabling docker service ...
	I0920 19:05:44.668124  302538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:05:44.683196  302538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:05:44.696604  302538 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:05:44.843581  302538 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:05:44.959377  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:05:44.973472  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:05:44.991282  302538 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:05:44.991344  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.001696  302538 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:05:45.001776  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.012684  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.023288  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.034330  302538 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:05:45.045773  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.056332  302538 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.074730  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.085656  302538 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:05:45.096371  302538 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:05:45.096447  302538 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:05:45.112094  302538 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:05:45.123050  302538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:45.236136  302538 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:05:45.325978  302538 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:05:45.326065  302538 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:05:45.330452  302538 start.go:563] Will wait 60s for crictl version
	I0920 19:05:45.330527  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.334010  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:05:45.373622  302538 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:05:45.373736  302538 ssh_runner.go:195] Run: crio --version
	I0920 19:05:45.401279  302538 ssh_runner.go:195] Run: crio --version
	I0920 19:05:45.430445  302538 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 19:05:45.431717  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:45.434768  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:45.435094  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:45.435121  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:45.435335  302538 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 19:05:45.439275  302538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:45.451300  302538 kubeadm.go:883] updating cluster {Name:no-preload-037711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-037711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:05:45.451461  302538 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:05:45.451502  302538 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:45.485045  302538 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 19:05:45.485073  302538 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 19:05:45.485130  302538 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:45.485150  302538 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.485168  302538 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.485182  302538 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.485231  302538 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.485171  302538 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.485305  302538 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 19:05:45.485450  302538 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.486694  302538 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.486700  302538 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.486808  302538 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.486808  302538 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 19:05:45.486829  302538 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.486894  302538 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:45.486829  302538 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.487055  302538 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.708911  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0920 19:05:45.773014  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.815176  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.818274  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.818298  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.829644  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.850791  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.862553  302538 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0920 19:05:45.862616  302538 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.862680  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.907516  302538 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0920 19:05:45.907573  302538 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.907629  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.938640  302538 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0920 19:05:45.938715  302538 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.938755  302538 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0920 19:05:45.938799  302538 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.938845  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.938770  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.947658  302538 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0920 19:05:45.947706  302538 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.947757  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.965105  302538 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0920 19:05:45.965161  302538 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.965166  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.965191  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.965248  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.965282  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.965344  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.965350  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:46.044513  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:46.044640  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:46.077894  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:46.080113  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:46.080170  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:46.080239  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:46.155137  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:46.155188  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:46.208431  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:46.208477  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:46.208521  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:46.208565  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:46.290657  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:46.290694  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 19:05:46.290794  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 19:05:46.325206  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 19:05:46.325353  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 19:05:46.353181  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0920 19:05:46.353289  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 19:05:46.353307  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 19:05:46.353312  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0920 19:05:46.353331  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0920 19:05:46.353383  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0920 19:05:46.353418  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 19:05:46.353384  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 19:05:46.353512  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 19:05:46.379873  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0920 19:05:46.379934  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0920 19:05:46.379979  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 19:05:46.380024  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0920 19:05:46.379981  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0920 19:05:46.380134  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 19:05:43.105005  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:45.105781  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:47.604822  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:44.235886  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:44.736349  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:45.235783  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:45.736619  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:46.236082  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:46.736609  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:47.236078  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:47.736130  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:48.236218  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:48.735858  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:45.784555  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:47.785125  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:46.622278  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:48.339532  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.985991382s)
	I0920 19:05:48.339568  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0920 19:05:48.339594  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 19:05:48.339653  302538 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.959488823s)
	I0920 19:05:48.339685  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0920 19:05:48.339665  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 19:05:48.339742  302538 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.717432253s)
	I0920 19:05:48.339787  302538 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0920 19:05:48.339815  302538 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:48.339842  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:48.343725  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:50.823508  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.483779728s)
	I0920 19:05:50.823559  302538 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.479795238s)
	I0920 19:05:50.823593  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0920 19:05:50.823637  302538 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0920 19:05:50.823649  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:50.823692  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0920 19:05:49.607326  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:51.609055  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:49.236645  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:49.736183  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:50.236642  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:50.735713  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:51.235862  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:51.736479  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:52.235726  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:52.735939  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:53.235759  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:53.736290  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:50.284090  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:52.284996  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:54.127303  302538 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.303601736s)
	I0920 19:05:54.127415  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:54.127327  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.303608969s)
	I0920 19:05:54.127455  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0920 19:05:54.127488  302538 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0920 19:05:54.127530  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0920 19:05:56.202021  302538 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.074563861s)
	I0920 19:05:56.202050  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.074501802s)
	I0920 19:05:56.202076  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0920 19:05:56.202095  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 19:05:56.202118  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 19:05:56.202184  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 19:05:56.202202  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 19:05:56.207141  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0920 19:05:54.104909  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:56.105373  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:54.235840  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:54.735817  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:55.235812  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:55.736410  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:56.236203  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:56.735713  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:57.235777  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:57.735835  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:58.236448  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:58.736010  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:54.783661  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:56.784770  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:58.785122  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:58.166303  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.964088667s)
	I0920 19:05:58.166340  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0920 19:05:58.166369  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 19:05:58.166424  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 19:05:59.625258  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.458808535s)
	I0920 19:05:59.625294  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0920 19:05:59.625318  302538 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 19:05:59.625361  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0920 19:06:00.572722  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 19:06:00.572768  302538 cache_images.go:123] Successfully loaded all cached images
	I0920 19:06:00.572774  302538 cache_images.go:92] duration metric: took 15.087689513s to LoadCachedImages
	I0920 19:06:00.572788  302538 kubeadm.go:934] updating node { 192.168.61.136 8443 v1.31.1 crio true true} ...
	I0920 19:06:00.572917  302538 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-037711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-037711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:06:00.572994  302538 ssh_runner.go:195] Run: crio config
	I0920 19:06:00.619832  302538 cni.go:84] Creating CNI manager for ""
	I0920 19:06:00.619861  302538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:06:00.619875  302538 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:06:00.619910  302538 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.136 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-037711 NodeName:no-preload-037711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:06:00.620110  302538 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-037711"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:06:00.620181  302538 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:06:00.630434  302538 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:06:00.630513  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:06:00.639447  302538 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0920 19:06:00.656195  302538 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:06:00.675718  302538 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0920 19:06:00.709191  302538 ssh_runner.go:195] Run: grep 192.168.61.136	control-plane.minikube.internal$ /etc/hosts
	I0920 19:06:00.713271  302538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:06:00.726826  302538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:06:00.850927  302538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:06:00.869014  302538 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711 for IP: 192.168.61.136
	I0920 19:06:00.869044  302538 certs.go:194] generating shared ca certs ...
	I0920 19:06:00.869109  302538 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:06:00.869331  302538 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:06:00.869393  302538 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:06:00.869405  302538 certs.go:256] generating profile certs ...
	I0920 19:06:00.869507  302538 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/client.key
	I0920 19:06:00.869589  302538 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/apiserver.key.b5da98fb
	I0920 19:06:00.869654  302538 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/proxy-client.key
	I0920 19:06:00.869831  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:06:00.869877  302538 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:06:00.869890  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:06:00.869947  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:06:00.869981  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:06:00.870010  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:06:00.870068  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:06:00.870802  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:06:00.922699  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:06:00.953401  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:06:00.996889  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:06:01.024682  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 19:06:01.050412  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 19:06:01.081212  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:06:01.108337  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:06:01.133628  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:06:01.158805  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:06:01.186888  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:06:01.211771  302538 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:06:01.229448  302538 ssh_runner.go:195] Run: openssl version
	I0920 19:06:01.235289  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:06:01.246775  302538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:06:01.251410  302538 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:06:01.251472  302538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:06:01.257271  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:06:01.268229  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:06:01.280431  302538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:06:01.285643  302538 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:06:01.285736  302538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:06:01.291772  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:06:01.302858  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:06:01.314034  302538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:06:01.319160  302538 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:06:01.319235  302538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:06:01.325450  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:06:01.336803  302538 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:06:01.341439  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:06:01.347592  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:06:01.354109  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:06:01.360513  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:06:01.366749  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:06:01.372898  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 19:06:01.379101  302538 kubeadm.go:392] StartCluster: {Name:no-preload-037711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-037711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:06:01.379228  302538 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:06:01.379280  302538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:06:01.416896  302538 cri.go:89] found id: ""
	I0920 19:06:01.416972  302538 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:05:58.606203  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:00.606802  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:59.236283  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:59.736440  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:00.236142  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:00.735772  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:01.236360  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:01.736388  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:02.236462  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:02.736742  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.236457  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.736705  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:01.284596  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:03.784495  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:01.428611  302538 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:06:01.428636  302538 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:06:01.428685  302538 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:06:01.439392  302538 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:06:01.440512  302538 kubeconfig.go:125] found "no-preload-037711" server: "https://192.168.61.136:8443"
	I0920 19:06:01.442938  302538 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:06:01.452938  302538 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.136
	I0920 19:06:01.452982  302538 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:06:01.452999  302538 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:06:01.453062  302538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:06:01.487878  302538 cri.go:89] found id: ""
	I0920 19:06:01.487967  302538 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:06:01.506032  302538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:06:01.516536  302538 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:06:01.516562  302538 kubeadm.go:157] found existing configuration files:
	
	I0920 19:06:01.516609  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:06:01.526718  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:06:01.526790  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:06:01.536809  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:06:01.546172  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:06:01.546243  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:06:01.556211  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:06:01.565796  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:06:01.565869  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:06:01.577089  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:06:01.587862  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:06:01.587985  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:06:01.598666  302538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:06:01.610018  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:01.740046  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.566817  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.784258  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.848752  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.933469  302538 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:06:02.933579  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.434385  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.933975  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.962422  302538 api_server.go:72] duration metric: took 1.028951755s to wait for apiserver process to appear ...
	I0920 19:06:03.962453  302538 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:06:03.962485  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:03.963119  302538 api_server.go:269] stopped: https://192.168.61.136:8443/healthz: Get "https://192.168.61.136:8443/healthz": dial tcp 192.168.61.136:8443: connect: connection refused
	I0920 19:06:04.462843  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.443140  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:06:06.443178  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:06:06.443196  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.485554  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:06:06.485597  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:06:06.485614  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.566023  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:06:06.566068  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:06:06.963116  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.972764  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:06:06.972804  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:06:07.463432  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:07.470963  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:06:07.471000  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:06:07.962553  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:07.967724  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 200:
	ok
	I0920 19:06:07.975215  302538 api_server.go:141] control plane version: v1.31.1
	I0920 19:06:07.975248  302538 api_server.go:131] duration metric: took 4.01278814s to wait for apiserver health ...
	I0920 19:06:07.975258  302538 cni.go:84] Creating CNI manager for ""
	I0920 19:06:07.975267  302538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:06:07.977455  302538 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:06:03.106079  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:05.609475  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:04.236005  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:04.735854  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:05.236716  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:05.736668  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:06.235839  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:06.736412  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:07.236224  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:07.735830  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:08.235800  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:08.736645  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:06.284930  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:08.784854  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:07.979099  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:06:07.991210  302538 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:06:08.016110  302538 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:06:08.031124  302538 system_pods.go:59] 8 kube-system pods found
	I0920 19:06:08.031177  302538 system_pods.go:61] "coredns-7c65d6cfc9-8gmsq" [91d89ad2-f899-464c-b351-a0773c16223b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:06:08.031191  302538 system_pods.go:61] "etcd-no-preload-037711" [5b353ad3-0389-4e3d-b5c3-2f2bc65db200] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 19:06:08.031203  302538 system_pods.go:61] "kube-apiserver-no-preload-037711" [b19002c7-f891-4bc1-a2f0-0f6beebb3987] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 19:06:08.031247  302538 system_pods.go:61] "kube-controller-manager-no-preload-037711" [a5b1951d-7189-4ee3-bc28-bed058048ebb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:06:08.031262  302538 system_pods.go:61] "kube-proxy-zzmkv" [c8f4695b-eefd-407a-9b7c-d5078632d120] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 19:06:08.031270  302538 system_pods.go:61] "kube-scheduler-no-preload-037711" [b44824ba-52ad-4e86-9408-118f0e1852d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 19:06:08.031280  302538 system_pods.go:61] "metrics-server-6867b74b74-7xpgm" [f6280d56-5be4-475f-91da-2862e992868f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:06:08.031290  302538 system_pods.go:61] "storage-provisioner" [d1efb64f-d2a9-4bb4-9bc3-c643c415fcf2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 19:06:08.031300  302538 system_pods.go:74] duration metric: took 15.160935ms to wait for pod list to return data ...
	I0920 19:06:08.031310  302538 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:06:08.035903  302538 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:06:08.035953  302538 node_conditions.go:123] node cpu capacity is 2
	I0920 19:06:08.035968  302538 node_conditions.go:105] duration metric: took 4.652846ms to run NodePressure ...
	I0920 19:06:08.035995  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:08.404721  302538 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 19:06:08.409400  302538 kubeadm.go:739] kubelet initialised
	I0920 19:06:08.409423  302538 kubeadm.go:740] duration metric: took 4.670172ms waiting for restarted kubelet to initialise ...
	I0920 19:06:08.409432  302538 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:06:08.416547  302538 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:10.426817  302538 pod_ready.go:103] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:08.107050  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:10.606744  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:09.236127  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:09.735809  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:10.236585  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:10.735863  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:11.236700  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:11.736557  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:12.236483  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:12.735695  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:13.235905  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:13.736128  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:10.785471  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:13.284642  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:12.923811  302538 pod_ready.go:103] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:15.423162  302538 pod_ready.go:103] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:15.926280  302538 pod_ready.go:93] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:15.926318  302538 pod_ready.go:82] duration metric: took 7.509740963s for pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:15.926332  302538 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:15.932683  302538 pod_ready.go:93] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:15.932713  302538 pod_ready.go:82] duration metric: took 6.372388ms for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:15.932725  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:13.111190  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:15.606371  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:14.236234  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:14.736677  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:15.236499  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:15.735667  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:16.235774  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:16.735833  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:17.236149  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:17.735782  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:18.236400  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:18.736460  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:15.784441  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:18.284748  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:17.938853  302538 pod_ready.go:103] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:19.939569  302538 pod_ready.go:103] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:18.104867  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:20.105870  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:22.605773  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:19.236298  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:19.736672  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:20.236401  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:20.735810  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:21.235673  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:21.736112  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:22.235998  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:22.736179  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:23.236680  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:23.736388  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:20.783320  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:22.783590  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:21.939753  302538 pod_ready.go:93] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:21.939781  302538 pod_ready.go:82] duration metric: took 6.007035191s for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:21.939794  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.446396  302538 pod_ready.go:93] pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:22.446425  302538 pod_ready.go:82] duration metric: took 506.622064ms for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.446435  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zzmkv" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.452105  302538 pod_ready.go:93] pod "kube-proxy-zzmkv" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:22.452130  302538 pod_ready.go:82] duration metric: took 5.688419ms for pod "kube-proxy-zzmkv" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.452139  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.456181  302538 pod_ready.go:93] pod "kube-scheduler-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:22.456205  302538 pod_ready.go:82] duration metric: took 4.05917ms for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.456215  302538 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:24.463262  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:24.606021  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:27.105497  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:24.236369  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:24.736082  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:25.236694  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:25.736346  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:26.236075  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:26.736666  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:27.236418  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:27.736656  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:28.235972  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:28.735743  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:24.783673  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:26.783960  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.283970  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:26.962413  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.462423  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.606628  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:32.105603  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.236688  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:29.736132  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:30.236404  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:30.735733  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:31.236364  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:31.736031  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:32.236457  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:32.735751  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:33.236371  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:33.736474  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:31.284572  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:33.286630  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:31.464686  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:33.962309  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:35.963445  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:34.105897  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:36.605140  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:34.236387  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:34.236472  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:34.276702  303486 cri.go:89] found id: ""
	I0920 19:06:34.276735  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.276747  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:34.276758  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:34.276815  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:34.312886  303486 cri.go:89] found id: ""
	I0920 19:06:34.312923  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.312935  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:34.312950  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:34.313024  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:34.347199  303486 cri.go:89] found id: ""
	I0920 19:06:34.347240  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.347250  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:34.347258  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:34.347332  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:34.383077  303486 cri.go:89] found id: ""
	I0920 19:06:34.383110  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.383121  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:34.383130  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:34.383202  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:34.421184  303486 cri.go:89] found id: ""
	I0920 19:06:34.421212  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.421222  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:34.421231  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:34.421304  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:34.459964  303486 cri.go:89] found id: ""
	I0920 19:06:34.459998  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.460009  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:34.460018  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:34.460085  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:34.493761  303486 cri.go:89] found id: ""
	I0920 19:06:34.493803  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.493815  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:34.493824  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:34.493894  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:34.534406  303486 cri.go:89] found id: ""
	I0920 19:06:34.534445  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.534457  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:34.534471  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:34.534496  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:34.607256  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:34.607297  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:34.644923  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:34.644953  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:34.693574  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:34.693622  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:34.707703  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:34.707742  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:34.846809  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:37.347895  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:37.377651  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:37.377728  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:37.430034  303486 cri.go:89] found id: ""
	I0920 19:06:37.430071  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.430079  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:37.430087  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:37.430156  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:37.467026  303486 cri.go:89] found id: ""
	I0920 19:06:37.467055  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.467063  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:37.467069  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:37.467120  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:37.505791  303486 cri.go:89] found id: ""
	I0920 19:06:37.505824  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.505835  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:37.505845  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:37.505943  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:37.541519  303486 cri.go:89] found id: ""
	I0920 19:06:37.541556  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.541568  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:37.541577  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:37.541633  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:37.576088  303486 cri.go:89] found id: ""
	I0920 19:06:37.576126  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.576137  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:37.576146  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:37.576204  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:37.613039  303486 cri.go:89] found id: ""
	I0920 19:06:37.613074  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.613084  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:37.613091  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:37.613153  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:37.656440  303486 cri.go:89] found id: ""
	I0920 19:06:37.656473  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.656482  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:37.656489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:37.656555  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:37.693247  303486 cri.go:89] found id: ""
	I0920 19:06:37.693283  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.693292  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:37.693302  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:37.693321  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:37.769230  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:37.769280  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:37.811016  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:37.811058  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:37.865729  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:37.865773  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:37.880056  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:37.880094  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:37.956402  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:35.783789  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:37.787063  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:38.461824  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:40.465028  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:38.605494  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:40.605606  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:40.457303  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:40.473769  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:40.473848  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:40.511320  303486 cri.go:89] found id: ""
	I0920 19:06:40.511354  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.511363  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:40.511371  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:40.511433  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:40.547086  303486 cri.go:89] found id: ""
	I0920 19:06:40.547127  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.547138  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:40.547147  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:40.547216  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:40.580969  303486 cri.go:89] found id: ""
	I0920 19:06:40.581010  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.581022  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:40.581035  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:40.581098  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:40.615802  303486 cri.go:89] found id: ""
	I0920 19:06:40.615842  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.615851  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:40.615858  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:40.615931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:40.649398  303486 cri.go:89] found id: ""
	I0920 19:06:40.649444  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.649459  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:40.649467  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:40.649541  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:40.683124  303486 cri.go:89] found id: ""
	I0920 19:06:40.683160  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.683172  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:40.683181  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:40.683251  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:40.718005  303486 cri.go:89] found id: ""
	I0920 19:06:40.718032  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.718040  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:40.718047  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:40.718107  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:40.751965  303486 cri.go:89] found id: ""
	I0920 19:06:40.751992  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.752000  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:40.752010  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:40.752024  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:40.765195  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:40.765234  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:40.842287  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:40.842321  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:40.842338  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:40.928384  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:40.928430  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:40.970207  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:40.970242  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:43.526435  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:43.540582  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:43.540680  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:43.576798  303486 cri.go:89] found id: ""
	I0920 19:06:43.576837  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.576846  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:43.576852  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:43.576916  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:43.615261  303486 cri.go:89] found id: ""
	I0920 19:06:43.615290  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.615298  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:43.615305  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:43.615359  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:43.651214  303486 cri.go:89] found id: ""
	I0920 19:06:43.651251  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.651264  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:43.651277  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:43.651338  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:43.684483  303486 cri.go:89] found id: ""
	I0920 19:06:43.684523  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.684535  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:43.684544  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:43.684614  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:43.720996  303486 cri.go:89] found id: ""
	I0920 19:06:43.721026  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.721035  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:43.721041  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:43.721107  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:43.764445  303486 cri.go:89] found id: ""
	I0920 19:06:43.764482  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.764493  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:43.764501  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:43.764564  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:43.808848  303486 cri.go:89] found id: ""
	I0920 19:06:43.808878  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.808888  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:43.808897  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:43.808968  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:43.845462  303486 cri.go:89] found id: ""
	I0920 19:06:43.845491  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.845500  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:43.845511  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:43.845525  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:43.896550  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:43.896596  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:43.909243  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:43.909272  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:06:40.284735  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:42.783363  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:42.962289  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:44.963071  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:43.106353  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:45.606296  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	W0920 19:06:43.987455  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:43.987474  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:43.987491  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:44.063585  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:44.063629  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:46.602859  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:46.617286  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:46.617357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:46.653643  303486 cri.go:89] found id: ""
	I0920 19:06:46.653681  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.653693  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:46.653702  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:46.653778  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:46.691169  303486 cri.go:89] found id: ""
	I0920 19:06:46.691198  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.691206  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:46.691213  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:46.691271  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:46.725498  303486 cri.go:89] found id: ""
	I0920 19:06:46.725527  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.725538  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:46.725545  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:46.725614  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:46.758850  303486 cri.go:89] found id: ""
	I0920 19:06:46.758876  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.758884  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:46.758891  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:46.758942  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:46.793648  303486 cri.go:89] found id: ""
	I0920 19:06:46.793683  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.793692  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:46.793699  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:46.793755  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:46.832908  303486 cri.go:89] found id: ""
	I0920 19:06:46.832940  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.832947  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:46.832953  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:46.833019  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:46.866450  303486 cri.go:89] found id: ""
	I0920 19:06:46.866502  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.866513  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:46.866522  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:46.866593  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:46.901966  303486 cri.go:89] found id: ""
	I0920 19:06:46.902001  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.902013  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:46.902026  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:46.902041  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:46.948901  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:46.948946  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:46.963489  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:46.963534  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:47.041701  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:47.041722  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:47.041736  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:47.124320  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:47.124364  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:44.783818  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:46.784000  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:48.785175  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:46.963700  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:49.462018  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:48.104361  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:50.105520  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:52.605799  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:49.664255  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:49.677240  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:49.677322  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:49.712375  303486 cri.go:89] found id: ""
	I0920 19:06:49.712401  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.712409  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:49.712415  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:49.712476  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:49.747682  303486 cri.go:89] found id: ""
	I0920 19:06:49.747713  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.747721  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:49.747727  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:49.747783  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:49.782276  303486 cri.go:89] found id: ""
	I0920 19:06:49.782319  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.782329  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:49.782337  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:49.782400  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:49.822625  303486 cri.go:89] found id: ""
	I0920 19:06:49.822661  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.822672  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:49.822680  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:49.822751  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:49.862159  303486 cri.go:89] found id: ""
	I0920 19:06:49.862192  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.862202  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:49.862212  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:49.862281  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:49.897552  303486 cri.go:89] found id: ""
	I0920 19:06:49.897587  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.897595  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:49.897608  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:49.897667  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:49.931667  303486 cri.go:89] found id: ""
	I0920 19:06:49.931698  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.931709  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:49.931718  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:49.931774  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:49.969206  303486 cri.go:89] found id: ""
	I0920 19:06:49.969236  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.969244  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:49.969254  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:49.969266  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:50.019287  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:50.019328  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:50.033080  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:50.033113  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:50.106415  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:50.106442  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:50.106459  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:50.183710  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:50.183762  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:52.725443  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:52.739293  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:52.739386  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:52.772412  303486 cri.go:89] found id: ""
	I0920 19:06:52.772445  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.772454  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:52.772461  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:52.772528  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:52.811153  303486 cri.go:89] found id: ""
	I0920 19:06:52.811189  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.811197  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:52.811204  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:52.811260  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:52.848709  303486 cri.go:89] found id: ""
	I0920 19:06:52.848740  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.848749  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:52.848755  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:52.848811  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:52.883358  303486 cri.go:89] found id: ""
	I0920 19:06:52.883387  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.883394  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:52.883400  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:52.883455  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:52.917838  303486 cri.go:89] found id: ""
	I0920 19:06:52.917874  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.917893  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:52.917912  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:52.917982  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:52.952340  303486 cri.go:89] found id: ""
	I0920 19:06:52.952378  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.952387  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:52.952396  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:52.952471  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:52.986433  303486 cri.go:89] found id: ""
	I0920 19:06:52.986469  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.986478  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:52.986486  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:52.986582  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:53.024209  303486 cri.go:89] found id: ""
	I0920 19:06:53.024241  303486 logs.go:276] 0 containers: []
	W0920 19:06:53.024249  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:53.024260  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:53.024272  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:53.075336  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:53.075374  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:53.090761  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:53.090802  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:53.167883  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:53.167915  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:53.167933  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:53.242003  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:53.242044  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:50.785624  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:53.284212  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:51.462197  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:53.962545  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:55.962875  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:54.607806  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:57.105146  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:55.779107  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:55.793713  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:55.793802  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:55.829411  303486 cri.go:89] found id: ""
	I0920 19:06:55.829441  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.829450  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:55.829456  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:55.829513  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:55.864578  303486 cri.go:89] found id: ""
	I0920 19:06:55.864606  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.864617  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:55.864625  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:55.864686  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:55.897004  303486 cri.go:89] found id: ""
	I0920 19:06:55.897033  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.897041  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:55.897048  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:55.897106  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:55.931019  303486 cri.go:89] found id: ""
	I0920 19:06:55.931055  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.931066  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:55.931076  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:55.931141  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:55.966595  303486 cri.go:89] found id: ""
	I0920 19:06:55.966625  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.966635  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:55.966643  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:55.966693  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:55.999707  303486 cri.go:89] found id: ""
	I0920 19:06:55.999736  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.999747  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:55.999756  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:55.999825  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:56.034323  303486 cri.go:89] found id: ""
	I0920 19:06:56.034361  303486 logs.go:276] 0 containers: []
	W0920 19:06:56.034371  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:56.034377  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:56.034433  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:56.069019  303486 cri.go:89] found id: ""
	I0920 19:06:56.069048  303486 logs.go:276] 0 containers: []
	W0920 19:06:56.069056  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:56.069066  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:56.069077  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:56.122820  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:56.122860  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:56.136924  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:56.136966  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:56.216255  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:56.216284  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:56.216299  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:56.293461  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:56.293506  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:58.831252  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:58.844410  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:58.844474  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:58.877508  303486 cri.go:89] found id: ""
	I0920 19:06:58.877539  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.877547  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:58.877555  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:58.877613  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:58.911284  303486 cri.go:89] found id: ""
	I0920 19:06:58.911315  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.911323  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:58.911329  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:58.911382  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:58.944646  303486 cri.go:89] found id: ""
	I0920 19:06:58.944675  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.944682  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:58.944688  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:58.944739  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:55.784379  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:58.283450  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:58.461839  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:00.461977  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:59.108066  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:01.605247  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:58.979752  303486 cri.go:89] found id: ""
	I0920 19:06:58.979787  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.979798  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:58.979807  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:58.979864  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:59.016613  303486 cri.go:89] found id: ""
	I0920 19:06:59.016649  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.016661  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:59.016670  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:59.016735  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:59.052012  303486 cri.go:89] found id: ""
	I0920 19:06:59.052039  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.052047  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:59.052054  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:59.052106  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:59.090102  303486 cri.go:89] found id: ""
	I0920 19:06:59.090140  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.090152  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:59.090159  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:59.090213  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:59.128028  303486 cri.go:89] found id: ""
	I0920 19:06:59.128057  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.128068  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:59.128080  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:59.128096  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:59.142966  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:59.143012  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:59.227311  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
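	Each cycle in this log walks the same list of expected control-plane components, asks the CRI runtime for matching containers, and records a warning when none exist. A minimal shell sketch of that per-component check, reconstructed only from the crictl invocations shown above (the loop structure itself is an illustration, not minikube source):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  # Same query the log shows for every component; --quiet prints only container IDs.
	  ids=$(sudo crictl ps -a --quiet --name="${name}")
	  if [ -z "${ids}" ]; then
	    echo "No container was found matching \"${name}\"" >&2
	  fi
	done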
	I0920 19:06:59.227336  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:59.227357  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:59.308319  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:59.308366  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:59.347299  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:59.347336  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:01.897644  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:01.912876  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:01.912951  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:01.956550  303486 cri.go:89] found id: ""
	I0920 19:07:01.956679  303486 logs.go:276] 0 containers: []
	W0920 19:07:01.956690  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:01.956700  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:01.956765  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:01.995391  303486 cri.go:89] found id: ""
	I0920 19:07:01.995425  303486 logs.go:276] 0 containers: []
	W0920 19:07:01.995433  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:01.995440  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:01.995501  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:02.031149  303486 cri.go:89] found id: ""
	I0920 19:07:02.031181  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.031193  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:02.031202  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:02.031273  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:02.065856  303486 cri.go:89] found id: ""
	I0920 19:07:02.065885  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.065894  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:02.065924  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:02.065981  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:02.101974  303486 cri.go:89] found id: ""
	I0920 19:07:02.102018  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.102032  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:02.102041  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:02.102115  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:02.138108  303486 cri.go:89] found id: ""
	I0920 19:07:02.138142  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.138151  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:02.138156  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:02.138217  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:02.170136  303486 cri.go:89] found id: ""
	I0920 19:07:02.170165  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.170173  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:02.170179  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:02.170244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:02.203944  303486 cri.go:89] found id: ""
	I0920 19:07:02.203969  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.203978  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:02.203991  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:02.204008  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:02.256635  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:02.256679  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:02.270266  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:02.270303  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:02.341145  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:02.341182  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:02.341199  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:02.415133  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:02.415175  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:00.283726  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:02.285304  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:02.462310  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:04.462872  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:03.605300  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:06.105872  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:04.952448  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:04.966632  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:04.966702  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:05.001098  303486 cri.go:89] found id: ""
	I0920 19:07:05.001131  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.001141  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:05.001149  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:05.001217  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:05.038160  303486 cri.go:89] found id: ""
	I0920 19:07:05.038186  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.038196  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:05.038202  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:05.038260  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:05.083301  303486 cri.go:89] found id: ""
	I0920 19:07:05.083346  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.083357  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:05.083365  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:05.083436  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:05.118916  303486 cri.go:89] found id: ""
	I0920 19:07:05.118952  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.118964  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:05.118972  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:05.119065  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:05.157452  303486 cri.go:89] found id: ""
	I0920 19:07:05.157485  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.157496  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:05.157511  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:05.157587  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:05.197100  303486 cri.go:89] found id: ""
	I0920 19:07:05.197133  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.197143  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:05.197152  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:05.197225  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:05.231286  303486 cri.go:89] found id: ""
	I0920 19:07:05.231317  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.231328  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:05.231336  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:05.231409  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:05.269798  303486 cri.go:89] found id: ""
	I0920 19:07:05.269835  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.269847  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:05.269862  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:05.269882  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:05.310029  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:05.310068  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:05.360493  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:05.360537  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:05.373771  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:05.373815  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:05.449860  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:05.449886  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:05.449924  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:08.034520  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:08.049970  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:08.050040  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:08.084683  303486 cri.go:89] found id: ""
	I0920 19:07:08.084714  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.084724  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:08.084731  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:08.084799  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:08.121150  303486 cri.go:89] found id: ""
	I0920 19:07:08.121176  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.121183  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:08.121190  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:08.121244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:08.157830  303486 cri.go:89] found id: ""
	I0920 19:07:08.157865  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.157877  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:08.157891  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:08.157967  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:08.191040  303486 cri.go:89] found id: ""
	I0920 19:07:08.191082  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.191094  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:08.191102  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:08.191169  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:08.230194  303486 cri.go:89] found id: ""
	I0920 19:07:08.230230  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.230239  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:08.230246  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:08.230304  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:08.268526  303486 cri.go:89] found id: ""
	I0920 19:07:08.268558  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.268566  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:08.268573  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:08.268631  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:08.302383  303486 cri.go:89] found id: ""
	I0920 19:07:08.302411  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.302420  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:08.302428  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:08.302492  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:08.336435  303486 cri.go:89] found id: ""
	I0920 19:07:08.336469  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.336479  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:08.336491  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:08.336505  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:08.418086  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:08.418129  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:08.458355  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:08.458391  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:08.507017  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:08.507062  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:08.522701  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:08.522737  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:08.592777  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:04.784475  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:07.283612  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:09.286218  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:06.963106  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:09.462861  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:08.108458  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:10.605447  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:12.605992  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:11.093689  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:11.107438  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:11.107503  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:11.139701  303486 cri.go:89] found id: ""
	I0920 19:07:11.139742  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.139755  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:11.139765  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:11.139822  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:11.196143  303486 cri.go:89] found id: ""
	I0920 19:07:11.196182  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.196191  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:11.196197  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:11.196268  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:11.232121  303486 cri.go:89] found id: ""
	I0920 19:07:11.232156  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.232164  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:11.232171  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:11.232238  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:11.267307  303486 cri.go:89] found id: ""
	I0920 19:07:11.267338  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.267349  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:11.267358  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:11.267423  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:11.306583  303486 cri.go:89] found id: ""
	I0920 19:07:11.306614  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.306623  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:11.306631  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:11.306698  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:11.348162  303486 cri.go:89] found id: ""
	I0920 19:07:11.348188  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.348196  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:11.348203  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:11.348257  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:11.383612  303486 cri.go:89] found id: ""
	I0920 19:07:11.383649  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.383660  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:11.383669  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:11.383736  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:11.417538  303486 cri.go:89] found id: ""
	I0920 19:07:11.417575  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.417583  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:11.417593  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:11.417609  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:11.470242  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:11.470282  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:11.485448  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:11.485480  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:11.559466  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:11.559495  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:11.559513  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:11.636080  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:11.636133  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:11.783461  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:13.783785  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:11.462940  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:13.963340  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:14.609611  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:17.105222  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:14.177278  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:14.190413  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:14.190483  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:14.224238  303486 cri.go:89] found id: ""
	I0920 19:07:14.224264  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.224272  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:14.224278  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:14.224330  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:14.265253  303486 cri.go:89] found id: ""
	I0920 19:07:14.265285  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.265297  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:14.265304  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:14.265357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:14.300591  303486 cri.go:89] found id: ""
	I0920 19:07:14.300619  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.300633  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:14.300639  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:14.300695  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:14.335638  303486 cri.go:89] found id: ""
	I0920 19:07:14.335669  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.335677  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:14.335683  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:14.335735  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:14.369291  303486 cri.go:89] found id: ""
	I0920 19:07:14.369328  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.369336  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:14.369344  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:14.369397  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:14.404913  303486 cri.go:89] found id: ""
	I0920 19:07:14.404947  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.404958  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:14.404967  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:14.405034  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:14.438793  303486 cri.go:89] found id: ""
	I0920 19:07:14.438834  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.438845  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:14.438856  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:14.438926  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:14.475268  303486 cri.go:89] found id: ""
	I0920 19:07:14.475297  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.475305  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:14.475321  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:14.475342  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:14.528066  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:14.528126  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:14.542850  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:14.542891  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:14.612772  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:14.612800  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:14.612819  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:14.694528  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:14.694579  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:17.234389  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:17.247479  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:17.247544  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:17.285461  303486 cri.go:89] found id: ""
	I0920 19:07:17.285488  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.285496  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:17.285502  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:17.285553  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:17.320580  303486 cri.go:89] found id: ""
	I0920 19:07:17.320606  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.320614  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:17.320620  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:17.320677  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:17.356405  303486 cri.go:89] found id: ""
	I0920 19:07:17.356440  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.356462  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:17.356471  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:17.356526  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:17.391268  303486 cri.go:89] found id: ""
	I0920 19:07:17.391301  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.391309  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:17.391316  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:17.391381  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:17.429886  303486 cri.go:89] found id: ""
	I0920 19:07:17.429938  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.429950  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:17.429959  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:17.430022  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:17.466059  303486 cri.go:89] found id: ""
	I0920 19:07:17.466093  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.466104  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:17.466111  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:17.466176  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:17.501128  303486 cri.go:89] found id: ""
	I0920 19:07:17.501159  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.501168  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:17.501174  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:17.501247  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:17.536969  303486 cri.go:89] found id: ""
	I0920 19:07:17.536999  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.537007  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:17.537016  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:17.537031  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:17.592071  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:17.592119  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:17.609022  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:17.609057  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:17.696393  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:17.696420  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:17.696434  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:17.778077  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:17.778122  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:15.785002  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:18.284101  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:16.463809  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:18.964348  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:19.604758  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:21.608192  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:20.319211  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:20.332158  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:20.332235  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:20.366195  303486 cri.go:89] found id: ""
	I0920 19:07:20.366230  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.366241  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:20.366250  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:20.366313  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:20.401786  303486 cri.go:89] found id: ""
	I0920 19:07:20.401819  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.401829  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:20.401846  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:20.401943  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:20.433684  303486 cri.go:89] found id: ""
	I0920 19:07:20.433711  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.433719  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:20.433725  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:20.433783  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:20.469495  303486 cri.go:89] found id: ""
	I0920 19:07:20.469524  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.469535  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:20.469543  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:20.469613  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:20.502214  303486 cri.go:89] found id: ""
	I0920 19:07:20.502245  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.502256  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:20.502263  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:20.502329  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:20.535829  303486 cri.go:89] found id: ""
	I0920 19:07:20.535867  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.535879  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:20.535887  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:20.535952  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:20.569605  303486 cri.go:89] found id: ""
	I0920 19:07:20.569635  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.569643  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:20.569654  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:20.569726  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:20.603676  303486 cri.go:89] found id: ""
	I0920 19:07:20.603699  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.603706  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:20.603715  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:20.603726  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:20.656645  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:20.656692  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:20.671077  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:20.671107  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:20.740996  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:20.741028  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:20.741046  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:20.820541  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:20.820592  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:23.362973  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:23.380350  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:23.380432  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:23.423145  303486 cri.go:89] found id: ""
	I0920 19:07:23.423183  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.423193  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:23.423202  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:23.423272  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:23.459019  303486 cri.go:89] found id: ""
	I0920 19:07:23.459057  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.459068  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:23.459077  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:23.459144  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:23.502876  303486 cri.go:89] found id: ""
	I0920 19:07:23.502908  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.502920  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:23.502929  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:23.502994  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:23.538440  303486 cri.go:89] found id: ""
	I0920 19:07:23.538471  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.538481  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:23.538489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:23.538552  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:23.575164  303486 cri.go:89] found id: ""
	I0920 19:07:23.575199  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.575211  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:23.575220  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:23.575296  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:23.610449  303486 cri.go:89] found id: ""
	I0920 19:07:23.610480  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.610489  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:23.610495  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:23.610562  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:23.644164  303486 cri.go:89] found id: ""
	I0920 19:07:23.644195  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.644203  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:23.644209  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:23.644275  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:23.684379  303486 cri.go:89] found id: ""
	I0920 19:07:23.684417  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.684428  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:23.684442  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:23.684459  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:23.762838  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:23.762885  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:23.805616  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:23.805650  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:23.857080  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:23.857122  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:23.870602  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:23.870635  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:23.941187  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:20.284264  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:22.284388  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:24.285108  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:21.462493  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:23.467933  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:25.963071  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:24.106087  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:26.605442  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:26.441571  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:26.455091  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:26.455185  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:26.489658  303486 cri.go:89] found id: ""
	I0920 19:07:26.489696  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.489707  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:26.489716  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:26.489773  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:26.528829  303486 cri.go:89] found id: ""
	I0920 19:07:26.528865  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.528878  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:26.528886  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:26.528966  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:26.568402  303486 cri.go:89] found id: ""
	I0920 19:07:26.568429  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.568443  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:26.568450  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:26.568503  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:26.606654  303486 cri.go:89] found id: ""
	I0920 19:07:26.606683  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.606693  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:26.606701  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:26.606764  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:26.640825  303486 cri.go:89] found id: ""
	I0920 19:07:26.640856  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.640864  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:26.640871  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:26.640934  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:26.677023  303486 cri.go:89] found id: ""
	I0920 19:07:26.677054  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.677062  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:26.677068  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:26.677123  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:26.712921  303486 cri.go:89] found id: ""
	I0920 19:07:26.712956  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.712964  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:26.712971  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:26.713031  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:26.747750  303486 cri.go:89] found id: ""
	I0920 19:07:26.747778  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.747786  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:26.747796  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:26.747810  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:26.799240  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:26.799283  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:26.813197  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:26.813233  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:26.882751  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:26.882780  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:26.882799  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:26.965108  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:26.965146  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:26.784306  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:29.283573  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:28.461526  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:30.462242  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:28.606602  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:31.106657  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:29.503960  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:29.516601  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:29.516669  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:29.555581  303486 cri.go:89] found id: ""
	I0920 19:07:29.555622  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.555632  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:29.555640  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:29.555711  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:29.593858  303486 cri.go:89] found id: ""
	I0920 19:07:29.593885  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.593923  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:29.593937  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:29.593990  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:29.629507  303486 cri.go:89] found id: ""
	I0920 19:07:29.629538  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.629548  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:29.629557  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:29.629616  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:29.662880  303486 cri.go:89] found id: ""
	I0920 19:07:29.662913  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.662921  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:29.662928  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:29.662976  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:29.695422  303486 cri.go:89] found id: ""
	I0920 19:07:29.695448  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.695458  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:29.695466  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:29.695531  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:29.730641  303486 cri.go:89] found id: ""
	I0920 19:07:29.730673  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.730685  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:29.730693  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:29.730756  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:29.764186  303486 cri.go:89] found id: ""
	I0920 19:07:29.764220  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.764229  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:29.764238  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:29.764302  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:29.804146  303486 cri.go:89] found id: ""
	I0920 19:07:29.804174  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.804182  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:29.804191  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:29.804204  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:29.885573  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:29.885633  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:29.924619  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:29.924667  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:29.978187  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:29.978230  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:29.992161  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:29.992190  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:30.069767  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:32.570197  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:32.583160  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:32.583244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:32.620842  303486 cri.go:89] found id: ""
	I0920 19:07:32.620870  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.620881  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:32.620899  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:32.620958  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:32.657169  303486 cri.go:89] found id: ""
	I0920 19:07:32.657205  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.657216  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:32.657225  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:32.657292  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:32.694773  303486 cri.go:89] found id: ""
	I0920 19:07:32.694802  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.694809  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:32.694815  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:32.694882  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:32.733318  303486 cri.go:89] found id: ""
	I0920 19:07:32.733350  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.733360  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:32.733370  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:32.733436  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:32.766019  303486 cri.go:89] found id: ""
	I0920 19:07:32.766052  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.766062  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:32.766070  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:32.766138  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:32.801412  303486 cri.go:89] found id: ""
	I0920 19:07:32.801443  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.801454  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:32.801463  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:32.801533  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:32.833743  303486 cri.go:89] found id: ""
	I0920 19:07:32.833771  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.833779  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:32.833787  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:32.833847  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:32.866775  303486 cri.go:89] found id: ""
	I0920 19:07:32.866803  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.866811  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:32.866821  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:32.866839  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:32.919257  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:32.919310  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:32.933554  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:32.933602  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:33.002657  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:33.002702  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:33.002721  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:33.081271  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:33.081316  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:31.284488  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:33.782998  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:32.462645  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:34.963285  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:33.609072  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:36.107460  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:35.627131  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:35.640958  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:35.641032  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:35.675943  303486 cri.go:89] found id: ""
	I0920 19:07:35.675976  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.675984  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:35.675991  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:35.676044  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:35.710075  303486 cri.go:89] found id: ""
	I0920 19:07:35.710104  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.710116  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:35.710124  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:35.710194  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:35.747890  303486 cri.go:89] found id: ""
	I0920 19:07:35.747920  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.747931  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:35.747939  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:35.748004  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:35.786197  303486 cri.go:89] found id: ""
	I0920 19:07:35.786231  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.786242  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:35.786252  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:35.786314  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:35.819109  303486 cri.go:89] found id: ""
	I0920 19:07:35.819146  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.819158  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:35.819168  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:35.819244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:35.853244  303486 cri.go:89] found id: ""
	I0920 19:07:35.853282  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.853292  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:35.853301  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:35.853378  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:35.886864  303486 cri.go:89] found id: ""
	I0920 19:07:35.886897  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.886908  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:35.886917  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:35.886986  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:35.920872  303486 cri.go:89] found id: ""
	I0920 19:07:35.920906  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.920917  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:35.920939  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:35.920957  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:35.998741  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:35.998794  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:36.040681  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:36.040720  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:36.095848  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:36.095909  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:36.110903  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:36.110939  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:36.186658  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:38.687762  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:38.701640  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:38.701708  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:38.734908  303486 cri.go:89] found id: ""
	I0920 19:07:38.734946  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.734956  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:38.734966  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:38.735031  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:38.768062  303486 cri.go:89] found id: ""
	I0920 19:07:38.768100  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.768112  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:38.768120  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:38.768188  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:38.800881  303486 cri.go:89] found id: ""
	I0920 19:07:38.800915  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.800927  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:38.800936  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:38.801004  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:38.835119  303486 cri.go:89] found id: ""
	I0920 19:07:38.835148  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.835156  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:38.835164  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:38.835223  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:38.872677  303486 cri.go:89] found id: ""
	I0920 19:07:38.872712  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.872723  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:38.872733  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:38.872807  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:38.913921  303486 cri.go:89] found id: ""
	I0920 19:07:38.913955  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.913965  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:38.913972  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:38.914029  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:35.783443  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.284549  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:36.963668  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.963893  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.608347  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:41.106313  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.951849  303486 cri.go:89] found id: ""
	I0920 19:07:38.951882  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.951893  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:38.951902  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:38.951972  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:38.988117  303486 cri.go:89] found id: ""
	I0920 19:07:38.988149  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.988161  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:38.988177  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:38.988191  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:39.028804  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:39.028843  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:39.083374  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:39.083427  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:39.097434  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:39.097463  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:39.172185  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:39.172213  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:39.172226  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:41.756648  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:41.772358  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:41.772432  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:41.809067  303486 cri.go:89] found id: ""
	I0920 19:07:41.809109  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.809123  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:41.809132  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:41.809191  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:41.853413  303486 cri.go:89] found id: ""
	I0920 19:07:41.853445  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.853457  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:41.853465  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:41.853524  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:41.891536  303486 cri.go:89] found id: ""
	I0920 19:07:41.891569  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.891580  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:41.891588  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:41.891668  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:41.931046  303486 cri.go:89] found id: ""
	I0920 19:07:41.931085  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.931093  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:41.931099  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:41.931155  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:41.968120  303486 cri.go:89] found id: ""
	I0920 19:07:41.968152  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.968164  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:41.968172  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:41.968240  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:42.002478  303486 cri.go:89] found id: ""
	I0920 19:07:42.002512  303486 logs.go:276] 0 containers: []
	W0920 19:07:42.002523  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:42.002532  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:42.002599  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:42.038031  303486 cri.go:89] found id: ""
	I0920 19:07:42.038067  303486 logs.go:276] 0 containers: []
	W0920 19:07:42.038080  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:42.038087  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:42.038150  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:42.072124  303486 cri.go:89] found id: ""
	I0920 19:07:42.072155  303486 logs.go:276] 0 containers: []
	W0920 19:07:42.072166  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:42.072178  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:42.072195  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:42.128217  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:42.128259  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:42.142291  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:42.142322  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:42.215278  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:42.215305  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:42.215324  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:42.293431  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:42.293476  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:40.784191  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:43.283580  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:41.463429  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:43.963059  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:43.608790  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:46.105338  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:44.836094  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:44.850327  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:44.850397  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:44.884595  303486 cri.go:89] found id: ""
	I0920 19:07:44.884624  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.884632  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:44.884639  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:44.884711  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:44.917727  303486 cri.go:89] found id: ""
	I0920 19:07:44.917754  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.917763  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:44.917769  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:44.917837  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:44.955821  303486 cri.go:89] found id: ""
	I0920 19:07:44.955860  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.955871  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:44.955879  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:44.955937  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:44.994543  303486 cri.go:89] found id: ""
	I0920 19:07:44.994579  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.994590  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:44.994598  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:44.994651  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:45.031839  303486 cri.go:89] found id: ""
	I0920 19:07:45.031877  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.031888  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:45.031896  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:45.031962  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:45.070554  303486 cri.go:89] found id: ""
	I0920 19:07:45.070588  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.070601  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:45.070609  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:45.070678  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:45.108727  303486 cri.go:89] found id: ""
	I0920 19:07:45.108760  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.108771  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:45.108779  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:45.108855  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:45.144045  303486 cri.go:89] found id: ""
	I0920 19:07:45.144075  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.144083  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:45.144094  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:45.144108  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:45.185800  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:45.185834  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:45.238364  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:45.238410  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:45.252111  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:45.252145  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:45.329009  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:45.329036  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:45.329051  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:47.912910  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:47.926378  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:47.926458  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:47.961067  303486 cri.go:89] found id: ""
	I0920 19:07:47.961094  303486 logs.go:276] 0 containers: []
	W0920 19:07:47.961103  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:47.961111  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:47.961172  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:48.006680  303486 cri.go:89] found id: ""
	I0920 19:07:48.006717  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.006729  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:48.006738  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:48.006805  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:48.042230  303486 cri.go:89] found id: ""
	I0920 19:07:48.042261  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.042272  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:48.042281  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:48.042349  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:48.080779  303486 cri.go:89] found id: ""
	I0920 19:07:48.080836  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.080850  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:48.080860  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:48.080931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:48.119439  303486 cri.go:89] found id: ""
	I0920 19:07:48.119469  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.119477  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:48.119483  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:48.119536  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:48.156219  303486 cri.go:89] found id: ""
	I0920 19:07:48.156258  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.156269  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:48.156279  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:48.156354  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:48.192112  303486 cri.go:89] found id: ""
	I0920 19:07:48.192151  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.192162  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:48.192170  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:48.192240  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:48.228916  303486 cri.go:89] found id: ""
	I0920 19:07:48.228958  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.228968  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:48.228981  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:48.229003  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:48.284073  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:48.284115  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:48.297677  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:48.297713  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:48.374834  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:48.374860  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:48.374876  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:48.455468  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:48.455512  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:45.284055  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:47.783744  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:46.461832  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:48.462980  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:50.463485  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:48.605035  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:51.105952  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:50.998354  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:51.012827  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:51.012904  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:51.046701  303486 cri.go:89] found id: ""
	I0920 19:07:51.046739  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.046750  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:51.046758  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:51.046827  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:51.083829  303486 cri.go:89] found id: ""
	I0920 19:07:51.083867  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.083878  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:51.083891  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:51.083965  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:51.124126  303486 cri.go:89] found id: ""
	I0920 19:07:51.124170  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.124180  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:51.124187  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:51.124254  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:51.159141  303486 cri.go:89] found id: ""
	I0920 19:07:51.159175  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.159184  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:51.159190  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:51.159253  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:51.192793  303486 cri.go:89] found id: ""
	I0920 19:07:51.192829  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.192840  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:51.192863  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:51.192938  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:51.225489  303486 cri.go:89] found id: ""
	I0920 19:07:51.225515  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.225524  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:51.225530  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:51.225582  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:51.258256  303486 cri.go:89] found id: ""
	I0920 19:07:51.258283  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.258294  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:51.258301  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:51.258363  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:51.292474  303486 cri.go:89] found id: ""
	I0920 19:07:51.292504  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.292512  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:51.292522  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:51.292537  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:51.331386  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:51.331422  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:51.385136  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:51.385182  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:51.400792  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:51.400828  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:51.492771  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:51.492795  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:51.492810  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:49.784132  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:52.284075  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:54.284870  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:52.963813  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:55.464095  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:53.607259  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:56.106592  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:54.074889  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:54.088453  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:54.088534  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:54.125096  303486 cri.go:89] found id: ""
	I0920 19:07:54.125138  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.125159  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:54.125166  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:54.125231  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:54.159630  303486 cri.go:89] found id: ""
	I0920 19:07:54.159665  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.159676  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:54.159685  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:54.159759  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:54.195919  303486 cri.go:89] found id: ""
	I0920 19:07:54.195951  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.195965  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:54.195972  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:54.196042  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:54.230294  303486 cri.go:89] found id: ""
	I0920 19:07:54.230323  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.230332  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:54.230339  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:54.230396  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:54.266764  303486 cri.go:89] found id: ""
	I0920 19:07:54.266793  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.266800  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:54.266807  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:54.266865  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:54.300704  303486 cri.go:89] found id: ""
	I0920 19:07:54.300731  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.300741  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:54.300750  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:54.300817  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:54.334447  303486 cri.go:89] found id: ""
	I0920 19:07:54.334473  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.334480  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:54.334487  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:54.334546  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:54.369814  303486 cri.go:89] found id: ""
	I0920 19:07:54.369858  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.369866  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:54.369878  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:54.369890  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:54.423088  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:54.423135  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:54.436770  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:54.436801  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:54.510731  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:54.510757  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:54.510773  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:54.593041  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:54.593091  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:57.134030  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:57.147605  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:57.147674  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:57.202662  303486 cri.go:89] found id: ""
	I0920 19:07:57.202690  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.202699  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:57.202705  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:57.202757  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:57.236448  303486 cri.go:89] found id: ""
	I0920 19:07:57.236476  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.236484  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:57.236493  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:57.236558  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:57.269450  303486 cri.go:89] found id: ""
	I0920 19:07:57.269478  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.269485  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:57.269491  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:57.269544  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:57.305749  303486 cri.go:89] found id: ""
	I0920 19:07:57.305784  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.305795  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:57.305806  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:57.305877  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:57.339802  303486 cri.go:89] found id: ""
	I0920 19:07:57.339844  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.339857  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:57.339866  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:57.339942  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:57.371929  303486 cri.go:89] found id: ""
	I0920 19:07:57.371962  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.371971  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:57.371980  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:57.372051  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:57.405749  303486 cri.go:89] found id: ""
	I0920 19:07:57.405789  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.405802  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:57.405812  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:57.405888  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:57.439259  303486 cri.go:89] found id: ""
	I0920 19:07:57.439291  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.439300  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:57.439310  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:57.439323  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:57.491405  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:57.491450  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:57.505992  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:57.506027  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:57.580598  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:57.580623  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:57.580638  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:57.659475  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:57.659513  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:56.783867  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:58.783944  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:57.465789  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:59.963589  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:58.606492  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:01.105967  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:00.201478  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:00.217162  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:00.217228  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:00.252219  303486 cri.go:89] found id: ""
	I0920 19:08:00.252247  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.252256  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:00.252263  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:00.252334  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:00.287244  303486 cri.go:89] found id: ""
	I0920 19:08:00.287283  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.287295  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:00.287302  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:00.287367  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:00.325785  303486 cri.go:89] found id: ""
	I0920 19:08:00.325818  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.325829  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:00.325839  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:00.325931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:00.359718  303486 cri.go:89] found id: ""
	I0920 19:08:00.359747  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.359757  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:00.359766  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:00.359847  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:00.399105  303486 cri.go:89] found id: ""
	I0920 19:08:00.399147  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.399156  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:00.399163  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:00.399227  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:00.433647  303486 cri.go:89] found id: ""
	I0920 19:08:00.433675  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.433683  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:00.433692  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:00.433756  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:00.467771  303486 cri.go:89] found id: ""
	I0920 19:08:00.467820  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.467832  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:00.467841  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:00.467911  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:00.511320  303486 cri.go:89] found id: ""
	I0920 19:08:00.511363  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.511376  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:00.511392  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:00.511414  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:00.594669  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:00.594703  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:00.594723  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:00.672747  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:00.672800  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:00.710001  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:00.710049  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:00.760333  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:00.760378  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:03.274393  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:03.289260  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:03.289352  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:03.327884  303486 cri.go:89] found id: ""
	I0920 19:08:03.327919  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.327932  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:03.327942  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:03.328015  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:03.367259  303486 cri.go:89] found id: ""
	I0920 19:08:03.367289  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.367297  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:03.367303  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:03.367361  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:03.405843  303486 cri.go:89] found id: ""
	I0920 19:08:03.405899  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.405932  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:03.405942  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:03.406056  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:03.441026  303486 cri.go:89] found id: ""
	I0920 19:08:03.441058  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.441069  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:03.441078  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:03.441147  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:03.477213  303486 cri.go:89] found id: ""
	I0920 19:08:03.477249  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.477261  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:03.477327  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:03.477415  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:03.515843  303486 cri.go:89] found id: ""
	I0920 19:08:03.515880  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.515888  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:03.515895  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:03.515945  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:03.566972  303486 cri.go:89] found id: ""
	I0920 19:08:03.567009  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.567020  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:03.567028  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:03.567097  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:03.616957  303486 cri.go:89] found id: ""
	I0920 19:08:03.617000  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.617013  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:03.617029  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:03.617048  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:03.683140  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:03.683192  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:03.697225  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:03.697267  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:03.770430  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:03.770455  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:03.770478  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:03.848796  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:03.848836  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:01.284245  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:03.284437  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:01.964058  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:04.462786  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:03.607506  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:06.106008  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:06.387706  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:06.401600  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:06.401669  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:06.437854  303486 cri.go:89] found id: ""
	I0920 19:08:06.437890  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.437917  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:06.437926  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:06.437993  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:06.472617  303486 cri.go:89] found id: ""
	I0920 19:08:06.472647  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.472655  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:06.472662  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:06.472718  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:06.510083  303486 cri.go:89] found id: ""
	I0920 19:08:06.510118  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.510131  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:06.510140  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:06.510212  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:06.546388  303486 cri.go:89] found id: ""
	I0920 19:08:06.546418  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.546427  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:06.546434  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:06.546485  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:06.584043  303486 cri.go:89] found id: ""
	I0920 19:08:06.584084  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.584096  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:06.584106  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:06.584182  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:06.622118  303486 cri.go:89] found id: ""
	I0920 19:08:06.622147  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.622155  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:06.622161  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:06.622217  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:06.655513  303486 cri.go:89] found id: ""
	I0920 19:08:06.655552  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.655585  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:06.655593  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:06.655657  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:06.690286  303486 cri.go:89] found id: ""
	I0920 19:08:06.690324  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.690336  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:06.690350  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:06.690368  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:06.729229  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:06.729259  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:06.780368  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:06.780411  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:06.794746  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:06.794782  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:06.866918  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:06.866944  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:06.866967  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:05.784123  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:08.284383  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:06.462855  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:08.466867  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:10.963736  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:08.106490  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:10.606291  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:09.451583  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:09.465111  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:09.465178  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:09.497679  303486 cri.go:89] found id: ""
	I0920 19:08:09.497713  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.497725  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:09.497733  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:09.497797  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:09.535297  303486 cri.go:89] found id: ""
	I0920 19:08:09.535334  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.535345  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:09.535353  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:09.535427  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:09.572449  303486 cri.go:89] found id: ""
	I0920 19:08:09.572482  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.572491  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:09.572498  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:09.572608  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:09.612672  303486 cri.go:89] found id: ""
	I0920 19:08:09.612697  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.612705  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:09.612711  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:09.612797  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:09.654366  303486 cri.go:89] found id: ""
	I0920 19:08:09.654399  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.654408  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:09.654415  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:09.654470  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:09.694825  303486 cri.go:89] found id: ""
	I0920 19:08:09.694858  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.694870  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:09.694878  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:09.694942  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:09.731618  303486 cri.go:89] found id: ""
	I0920 19:08:09.731682  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.731693  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:09.731702  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:09.731775  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:09.766717  303486 cri.go:89] found id: ""
	I0920 19:08:09.766755  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.766765  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:09.766779  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:09.766794  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:09.823505  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:09.823549  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:09.837622  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:09.837658  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:09.919105  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:09.919139  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:09.919156  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:10.000899  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:10.000943  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:12.542974  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:12.557265  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:12.557335  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:12.594099  303486 cri.go:89] found id: ""
	I0920 19:08:12.594126  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.594134  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:12.594140  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:12.594199  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:12.627271  303486 cri.go:89] found id: ""
	I0920 19:08:12.627301  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.627308  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:12.627314  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:12.627366  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:12.661225  303486 cri.go:89] found id: ""
	I0920 19:08:12.661256  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.661265  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:12.661272  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:12.661332  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:12.701381  303486 cri.go:89] found id: ""
	I0920 19:08:12.701424  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.701437  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:12.701447  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:12.701524  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:12.739189  303486 cri.go:89] found id: ""
	I0920 19:08:12.739227  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.739235  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:12.739246  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:12.739299  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:12.780931  303486 cri.go:89] found id: ""
	I0920 19:08:12.780958  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.781055  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:12.781068  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:12.781124  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:12.818097  303486 cri.go:89] found id: ""
	I0920 19:08:12.818137  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.818150  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:12.818161  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:12.818294  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:12.852925  303486 cri.go:89] found id: ""
	I0920 19:08:12.852957  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.852965  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:12.852975  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:12.852990  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:12.924746  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:12.924774  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:12.924791  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:13.005668  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:13.005718  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:13.044327  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:13.044359  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:13.094788  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:13.094833  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:10.284510  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:12.783546  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:12.964694  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:15.463615  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:13.105052  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:15.604922  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:15.611965  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:15.625857  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:15.625960  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:15.662138  303486 cri.go:89] found id: ""
	I0920 19:08:15.662169  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.662177  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:15.662184  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:15.662261  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:15.696000  303486 cri.go:89] found id: ""
	I0920 19:08:15.696067  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.696100  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:15.696115  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:15.696234  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:15.735594  303486 cri.go:89] found id: ""
	I0920 19:08:15.735625  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.735633  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:15.735640  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:15.735699  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:15.774666  303486 cri.go:89] found id: ""
	I0920 19:08:15.774693  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.774703  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:15.774712  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:15.774777  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:15.810754  303486 cri.go:89] found id: ""
	I0920 19:08:15.810799  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.810811  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:15.810820  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:15.810884  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:15.846709  303486 cri.go:89] found id: ""
	I0920 19:08:15.846739  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.846748  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:15.846757  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:15.846819  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:15.880798  303486 cri.go:89] found id: ""
	I0920 19:08:15.880825  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.880833  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:15.880839  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:15.880895  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:15.915119  303486 cri.go:89] found id: ""
	I0920 19:08:15.915150  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.915159  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:15.915170  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:15.915186  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:15.966048  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:15.966087  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:15.979287  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:15.979322  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:16.052129  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:16.052163  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:16.052180  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:16.137743  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:16.137788  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:18.678389  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:18.693073  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:18.693152  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:18.734909  303486 cri.go:89] found id: ""
	I0920 19:08:18.734943  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.734954  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:18.734962  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:18.735028  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:18.773472  303486 cri.go:89] found id: ""
	I0920 19:08:18.773506  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.773517  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:18.773525  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:18.773620  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:18.812184  303486 cri.go:89] found id: ""
	I0920 19:08:18.812218  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.812228  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:18.812236  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:18.812305  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:18.846569  303486 cri.go:89] found id: ""
	I0920 19:08:18.846608  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.846619  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:18.846627  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:18.846700  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:18.881794  303486 cri.go:89] found id: ""
	I0920 19:08:18.881836  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.881862  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:18.881870  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:18.881943  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:18.919657  303486 cri.go:89] found id: ""
	I0920 19:08:18.919688  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.919698  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:18.919708  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:18.919774  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:14.784734  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:17.283590  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:19.284056  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:17.962913  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:20.462190  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:18.105736  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:20.106314  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:22.605231  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:18.955117  303486 cri.go:89] found id: ""
	I0920 19:08:18.955146  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.955157  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:18.955166  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:18.955243  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:18.992389  303486 cri.go:89] found id: ""
	I0920 19:08:18.992422  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.992430  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:18.992444  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:18.992460  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:19.070374  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:19.070417  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:19.110793  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:19.110825  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:19.163783  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:19.163830  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:19.177348  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:19.177387  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:19.249469  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:21.749644  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:21.764920  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:21.765006  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:21.803443  303486 cri.go:89] found id: ""
	I0920 19:08:21.803473  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.803481  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:21.803489  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:21.803545  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:21.844552  303486 cri.go:89] found id: ""
	I0920 19:08:21.844582  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.844593  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:21.844601  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:21.844672  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:21.878979  303486 cri.go:89] found id: ""
	I0920 19:08:21.879007  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.879017  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:21.879029  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:21.879099  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:21.915745  303486 cri.go:89] found id: ""
	I0920 19:08:21.915773  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.915783  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:21.915794  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:21.915865  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:21.948999  303486 cri.go:89] found id: ""
	I0920 19:08:21.949031  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.949043  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:21.949052  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:21.949118  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:21.984238  303486 cri.go:89] found id: ""
	I0920 19:08:21.984269  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.984277  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:21.984284  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:21.984357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:22.018581  303486 cri.go:89] found id: ""
	I0920 19:08:22.018610  303486 logs.go:276] 0 containers: []
	W0920 19:08:22.018620  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:22.018628  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:22.018694  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:22.051868  303486 cri.go:89] found id: ""
	I0920 19:08:22.051903  303486 logs.go:276] 0 containers: []
	W0920 19:08:22.051913  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:22.051925  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:22.051942  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:22.106711  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:22.106756  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:22.120910  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:22.120940  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:22.196564  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:22.196591  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:22.196608  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:22.275235  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:22.275288  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:21.785129  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:24.284359  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:22.463122  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:24.962694  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:25.105050  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:27.105237  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:24.821956  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:24.836846  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:24.836918  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:24.878371  303486 cri.go:89] found id: ""
	I0920 19:08:24.878398  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.878406  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:24.878413  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:24.878464  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:24.911450  303486 cri.go:89] found id: ""
	I0920 19:08:24.911480  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.911489  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:24.911497  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:24.911590  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:24.949248  303486 cri.go:89] found id: ""
	I0920 19:08:24.949281  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.949289  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:24.949298  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:24.949352  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:24.987899  303486 cri.go:89] found id: ""
	I0920 19:08:24.987932  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.987939  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:24.987948  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:24.988011  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:25.020589  303486 cri.go:89] found id: ""
	I0920 19:08:25.020627  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.020638  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:25.020646  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:25.020701  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:25.060223  303486 cri.go:89] found id: ""
	I0920 19:08:25.060250  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.060258  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:25.060266  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:25.060331  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:25.099111  303486 cri.go:89] found id: ""
	I0920 19:08:25.099141  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.099151  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:25.099160  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:25.099242  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:25.136055  303486 cri.go:89] found id: ""
	I0920 19:08:25.136089  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.136098  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:25.136118  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:25.136135  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:25.187619  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:25.187658  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:25.200983  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:25.201016  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:25.270746  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:25.270778  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:25.270795  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:25.350009  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:25.350050  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:27.889864  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:27.903156  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:27.903231  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:27.935087  303486 cri.go:89] found id: ""
	I0920 19:08:27.935118  303486 logs.go:276] 0 containers: []
	W0920 19:08:27.935128  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:27.935138  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:27.935199  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:27.970451  303486 cri.go:89] found id: ""
	I0920 19:08:27.970479  303486 logs.go:276] 0 containers: []
	W0920 19:08:27.970487  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:27.970494  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:27.970545  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:28.004931  303486 cri.go:89] found id: ""
	I0920 19:08:28.004980  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.004992  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:28.005002  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:28.005068  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:28.039438  303486 cri.go:89] found id: ""
	I0920 19:08:28.039470  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.039478  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:28.039485  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:28.039535  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:28.076023  303486 cri.go:89] found id: ""
	I0920 19:08:28.076050  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.076058  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:28.076064  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:28.076131  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:28.114726  303486 cri.go:89] found id: ""
	I0920 19:08:28.114761  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.114772  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:28.114781  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:28.114846  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:28.150790  303486 cri.go:89] found id: ""
	I0920 19:08:28.150822  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.150832  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:28.150841  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:28.150908  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:28.186576  303486 cri.go:89] found id: ""
	I0920 19:08:28.186606  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.186614  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:28.186626  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:28.186648  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:28.240939  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:28.240984  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:28.255267  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:28.255304  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:28.327773  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:28.327797  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:28.327809  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:28.418011  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:28.418055  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:26.785099  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:29.284297  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:26.962825  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:28.963261  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:30.963575  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:29.605453  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:32.104848  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:30.962398  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:30.975385  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:30.975471  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:31.009898  303486 cri.go:89] found id: ""
	I0920 19:08:31.009952  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.009964  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:31.009973  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:31.010044  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:31.043639  303486 cri.go:89] found id: ""
	I0920 19:08:31.043670  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.043679  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:31.043689  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:31.043758  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:31.077709  303486 cri.go:89] found id: ""
	I0920 19:08:31.077745  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.077753  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:31.077759  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:31.077818  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:31.111117  303486 cri.go:89] found id: ""
	I0920 19:08:31.111150  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.111160  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:31.111168  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:31.111234  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:31.143888  303486 cri.go:89] found id: ""
	I0920 19:08:31.143921  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.143933  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:31.143942  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:31.144014  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:31.176694  303486 cri.go:89] found id: ""
	I0920 19:08:31.176729  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.176742  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:31.176751  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:31.176815  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:31.213794  303486 cri.go:89] found id: ""
	I0920 19:08:31.213832  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.213844  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:31.213854  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:31.213946  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:31.250160  303486 cri.go:89] found id: ""
	I0920 19:08:31.250219  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.250230  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:31.250244  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:31.250261  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:31.263748  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:31.263784  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:31.337719  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:31.337749  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:31.337762  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:31.420398  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:31.420446  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:31.459992  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:31.460030  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:31.284818  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:33.783288  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:33.462900  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:35.463122  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:34.105758  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:36.604917  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:34.014229  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:34.028129  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:34.028194  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:34.060793  303486 cri.go:89] found id: ""
	I0920 19:08:34.060832  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.060850  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:34.060859  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:34.060919  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:34.094440  303486 cri.go:89] found id: ""
	I0920 19:08:34.094467  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.094475  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:34.094481  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:34.094544  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:34.128824  303486 cri.go:89] found id: ""
	I0920 19:08:34.128861  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.128872  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:34.128881  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:34.128948  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:34.160861  303486 cri.go:89] found id: ""
	I0920 19:08:34.160894  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.160903  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:34.160911  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:34.160967  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:34.196897  303486 cri.go:89] found id: ""
	I0920 19:08:34.196933  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.196952  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:34.196958  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:34.197020  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:34.229083  303486 cri.go:89] found id: ""
	I0920 19:08:34.229115  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.229125  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:34.229134  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:34.229205  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:34.261877  303486 cri.go:89] found id: ""
	I0920 19:08:34.261922  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.261933  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:34.261941  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:34.262008  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:34.296145  303486 cri.go:89] found id: ""
	I0920 19:08:34.296177  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.296189  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:34.296199  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:34.296214  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:34.361598  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:34.361624  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:34.361641  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:34.441067  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:34.441110  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:34.483333  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:34.483362  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:34.538345  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:34.538388  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:37.053155  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:37.067157  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:37.067230  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:37.101432  303486 cri.go:89] found id: ""
	I0920 19:08:37.101466  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.101476  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:37.101485  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:37.101550  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:37.134375  303486 cri.go:89] found id: ""
	I0920 19:08:37.134408  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.134416  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:37.134423  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:37.134487  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:37.167049  303486 cri.go:89] found id: ""
	I0920 19:08:37.167087  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.167099  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:37.167107  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:37.167175  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:37.209358  303486 cri.go:89] found id: ""
	I0920 19:08:37.209387  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.209397  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:37.209405  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:37.209470  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:37.243227  303486 cri.go:89] found id: ""
	I0920 19:08:37.243261  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.243272  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:37.243281  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:37.243332  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:37.276546  303486 cri.go:89] found id: ""
	I0920 19:08:37.276596  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.276607  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:37.276626  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:37.276688  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:37.311233  303486 cri.go:89] found id: ""
	I0920 19:08:37.311268  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.311279  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:37.311287  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:37.311352  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:37.349970  303486 cri.go:89] found id: ""
	I0920 19:08:37.350003  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.350013  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:37.350025  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:37.350041  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:37.399405  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:37.399445  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:37.423764  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:37.423800  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:37.498797  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:37.498826  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:37.498841  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:37.575521  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:37.575566  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:35.783897  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:37.784496  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:37.463224  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:39.463445  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:38.605444  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:40.606712  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:40.118650  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:40.131967  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:40.132051  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:40.165313  303486 cri.go:89] found id: ""
	I0920 19:08:40.165349  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.165358  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:40.165366  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:40.165439  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:40.197194  303486 cri.go:89] found id: ""
	I0920 19:08:40.197223  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.197232  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:40.197238  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:40.197289  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:40.236769  303486 cri.go:89] found id: ""
	I0920 19:08:40.236800  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.236810  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:40.236819  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:40.236888  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:40.271960  303486 cri.go:89] found id: ""
	I0920 19:08:40.271984  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.271992  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:40.271998  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:40.272049  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:40.307874  303486 cri.go:89] found id: ""
	I0920 19:08:40.307909  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.307917  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:40.307923  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:40.307982  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:40.342128  303486 cri.go:89] found id: ""
	I0920 19:08:40.342160  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.342168  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:40.342175  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:40.342233  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:40.381493  303486 cri.go:89] found id: ""
	I0920 19:08:40.381529  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.381542  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:40.381551  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:40.381617  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:40.415164  303486 cri.go:89] found id: ""
	I0920 19:08:40.415199  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.415211  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:40.415222  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:40.415238  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:40.488306  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:40.488330  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:40.488350  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:40.567193  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:40.567235  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:40.607256  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:40.607287  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:40.659504  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:40.659542  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:43.174043  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:43.188690  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:43.188790  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:43.227223  303486 cri.go:89] found id: ""
	I0920 19:08:43.227251  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.227259  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:43.227267  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:43.227356  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:43.260099  303486 cri.go:89] found id: ""
	I0920 19:08:43.260128  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.260137  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:43.260143  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:43.260195  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:43.297846  303486 cri.go:89] found id: ""
	I0920 19:08:43.297875  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.297886  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:43.297894  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:43.297980  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:43.334026  303486 cri.go:89] found id: ""
	I0920 19:08:43.334061  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.334070  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:43.334078  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:43.334147  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:43.367765  303486 cri.go:89] found id: ""
	I0920 19:08:43.367795  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.367806  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:43.367814  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:43.367890  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:43.402722  303486 cri.go:89] found id: ""
	I0920 19:08:43.402766  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.402778  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:43.402787  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:43.402852  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:43.439643  303486 cri.go:89] found id: ""
	I0920 19:08:43.439674  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.439682  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:43.439690  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:43.439742  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:43.475931  303486 cri.go:89] found id: ""
	I0920 19:08:43.475965  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.475976  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:43.475991  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:43.476006  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:43.545694  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:43.545725  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:43.545739  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:43.627493  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:43.627549  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:43.667758  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:43.667794  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:43.721803  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:43.721851  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:40.285524  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:42.784336  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:41.962300  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:43.963712  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:45.963766  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:43.105271  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:45.105737  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:47.604667  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:46.237499  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:46.250854  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:46.250925  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:46.288918  303486 cri.go:89] found id: ""
	I0920 19:08:46.288950  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.288957  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:46.288964  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:46.289026  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:46.321113  303486 cri.go:89] found id: ""
	I0920 19:08:46.321149  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.321159  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:46.321168  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:46.321239  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:46.359606  303486 cri.go:89] found id: ""
	I0920 19:08:46.359643  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.359652  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:46.359659  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:46.359729  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:46.397059  303486 cri.go:89] found id: ""
	I0920 19:08:46.397089  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.397098  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:46.397104  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:46.397174  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:46.438224  303486 cri.go:89] found id: ""
	I0920 19:08:46.438261  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.438271  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:46.438279  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:46.438355  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:46.476933  303486 cri.go:89] found id: ""
	I0920 19:08:46.476963  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.476973  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:46.476981  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:46.477047  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:46.522115  303486 cri.go:89] found id: ""
	I0920 19:08:46.522150  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.522160  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:46.522167  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:46.522236  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:46.555508  303486 cri.go:89] found id: ""
	I0920 19:08:46.555541  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.555551  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:46.555565  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:46.555580  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:46.632314  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:46.632358  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:46.672381  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:46.672420  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:46.725777  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:46.725835  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:46.739924  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:46.739959  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:46.816667  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:45.284171  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:47.284420  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:49.284798  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:48.462088  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:50.463100  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:49.606279  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:52.105103  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:49.317620  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:49.331792  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:49.331872  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:49.365417  303486 cri.go:89] found id: ""
	I0920 19:08:49.365457  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.365470  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:49.365479  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:49.365543  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:49.399422  303486 cri.go:89] found id: ""
	I0920 19:08:49.399455  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.399465  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:49.399474  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:49.399532  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:49.433040  303486 cri.go:89] found id: ""
	I0920 19:08:49.433069  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.433076  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:49.433082  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:49.433149  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:49.466865  303486 cri.go:89] found id: ""
	I0920 19:08:49.466897  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.466909  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:49.466917  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:49.466986  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:49.499542  303486 cri.go:89] found id: ""
	I0920 19:08:49.499574  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.499583  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:49.499589  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:49.499639  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:49.534310  303486 cri.go:89] found id: ""
	I0920 19:08:49.534338  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.534346  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:49.534353  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:49.534411  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:49.580271  303486 cri.go:89] found id: ""
	I0920 19:08:49.580297  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.580305  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:49.580312  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:49.580385  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:49.626519  303486 cri.go:89] found id: ""
	I0920 19:08:49.626554  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.626562  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:49.626572  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:49.626587  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:49.682923  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:49.682963  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:49.695859  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:49.695895  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:49.767626  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:49.767669  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:49.767697  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:49.849570  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:49.849614  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:52.387653  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:52.400693  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:52.400757  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:52.434320  303486 cri.go:89] found id: ""
	I0920 19:08:52.434358  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.434369  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:52.434381  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:52.434448  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:52.469167  303486 cri.go:89] found id: ""
	I0920 19:08:52.469202  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.469214  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:52.469222  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:52.469291  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:52.504241  303486 cri.go:89] found id: ""
	I0920 19:08:52.504287  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.504295  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:52.504304  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:52.504367  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:52.539573  303486 cri.go:89] found id: ""
	I0920 19:08:52.539604  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.539613  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:52.539619  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:52.539697  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:52.573794  303486 cri.go:89] found id: ""
	I0920 19:08:52.573821  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.573829  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:52.573834  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:52.573931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:52.607628  303486 cri.go:89] found id: ""
	I0920 19:08:52.607660  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.607670  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:52.607676  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:52.607738  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:52.639088  303486 cri.go:89] found id: ""
	I0920 19:08:52.639121  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.639132  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:52.639140  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:52.639204  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:52.673585  303486 cri.go:89] found id: ""
	I0920 19:08:52.673624  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.673636  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:52.673650  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:52.673667  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:52.726463  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:52.726504  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:52.739520  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:52.739553  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:52.820610  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:52.820638  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:52.820653  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:52.898567  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:52.898612  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:51.783687  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:53.784963  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:52.962326  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:54.963069  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:54.105159  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:56.604367  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:55.440875  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:55.454526  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:55.454602  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:55.490616  303486 cri.go:89] found id: ""
	I0920 19:08:55.490655  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.490664  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:55.490671  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:55.490735  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:55.530256  303486 cri.go:89] found id: ""
	I0920 19:08:55.530287  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.530296  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:55.530304  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:55.530357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:55.565209  303486 cri.go:89] found id: ""
	I0920 19:08:55.565242  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.565253  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:55.565260  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:55.565319  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:55.599522  303486 cri.go:89] found id: ""
	I0920 19:08:55.599553  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.599563  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:55.599571  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:55.599634  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:55.634662  303486 cri.go:89] found id: ""
	I0920 19:08:55.634692  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.634700  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:55.634707  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:55.634759  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:55.670326  303486 cri.go:89] found id: ""
	I0920 19:08:55.670361  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.670372  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:55.670379  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:55.670434  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:55.702589  303486 cri.go:89] found id: ""
	I0920 19:08:55.702617  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.702625  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:55.702632  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:55.702694  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:55.737615  303486 cri.go:89] found id: ""
	I0920 19:08:55.737643  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.737653  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:55.737667  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:55.737682  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:55.816827  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:55.816873  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:55.855521  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:55.855550  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:55.905002  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:55.905047  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:55.918292  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:55.918324  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:55.987445  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:58.488566  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:58.503898  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:58.504001  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:58.539089  303486 cri.go:89] found id: ""
	I0920 19:08:58.539117  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.539127  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:58.539135  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:58.539199  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:58.576432  303486 cri.go:89] found id: ""
	I0920 19:08:58.576459  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.576467  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:58.576473  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:58.576542  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:58.613779  303486 cri.go:89] found id: ""
	I0920 19:08:58.613814  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.613825  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:58.613833  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:58.613932  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:58.648717  303486 cri.go:89] found id: ""
	I0920 19:08:58.648757  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.648768  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:58.648777  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:58.648845  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:58.681533  303486 cri.go:89] found id: ""
	I0920 19:08:58.681568  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.681585  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:58.681593  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:58.681647  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:58.714833  303486 cri.go:89] found id: ""
	I0920 19:08:58.714867  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.714877  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:58.714886  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:58.714951  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:58.755939  303486 cri.go:89] found id: ""
	I0920 19:08:58.755972  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.755980  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:58.755986  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:58.756037  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:58.793195  303486 cri.go:89] found id: ""
	I0920 19:08:58.793229  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.793240  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:58.793252  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:58.793267  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:58.807903  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:58.807939  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:58.873993  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:58.874022  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:58.874042  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:56.283846  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.286474  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:56.963398  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.963513  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.606087  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:01.106199  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.955201  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:58.955249  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:58.994230  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:58.994265  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:01.548403  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:01.561467  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:01.561541  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:01.595339  303486 cri.go:89] found id: ""
	I0920 19:09:01.595374  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.595382  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:01.595388  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:01.595463  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:01.631995  303486 cri.go:89] found id: ""
	I0920 19:09:01.632033  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.632043  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:01.632051  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:01.632119  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:01.667556  303486 cri.go:89] found id: ""
	I0920 19:09:01.667586  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.667596  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:01.667604  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:01.667669  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:01.702678  303486 cri.go:89] found id: ""
	I0920 19:09:01.702708  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.702716  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:01.702723  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:01.702786  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:01.739953  303486 cri.go:89] found id: ""
	I0920 19:09:01.739987  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.739999  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:01.740008  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:01.740075  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:01.774188  303486 cri.go:89] found id: ""
	I0920 19:09:01.774222  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.774239  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:01.774249  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:01.774317  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:01.808885  303486 cri.go:89] found id: ""
	I0920 19:09:01.808916  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.808927  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:01.808935  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:01.808997  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:01.842357  303486 cri.go:89] found id: ""
	I0920 19:09:01.842394  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.842404  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:01.842417  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:01.842433  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:01.881750  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:01.881782  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:01.932190  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:01.932236  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:01.946305  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:01.946337  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:02.020099  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:02.020127  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:02.020141  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:00.784428  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:03.284109  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:01.462613  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:03.962360  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:05.963735  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:03.605623  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:06.104994  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:04.601186  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:04.614292  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:04.614374  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:04.649579  303486 cri.go:89] found id: ""
	I0920 19:09:04.649611  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.649619  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:04.649625  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:04.649683  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:04.684039  303486 cri.go:89] found id: ""
	I0920 19:09:04.684076  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.684094  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:04.684108  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:04.684182  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:04.729130  303486 cri.go:89] found id: ""
	I0920 19:09:04.729166  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.729177  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:04.729186  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:04.729244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:04.762646  303486 cri.go:89] found id: ""
	I0920 19:09:04.762682  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.762690  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:04.762697  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:04.762761  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:04.797492  303486 cri.go:89] found id: ""
	I0920 19:09:04.797518  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.797527  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:04.797533  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:04.797588  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:04.832780  303486 cri.go:89] found id: ""
	I0920 19:09:04.832813  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.832823  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:04.832831  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:04.832893  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:04.868489  303486 cri.go:89] found id: ""
	I0920 19:09:04.868526  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.868537  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:04.868546  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:04.868613  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:04.901115  303486 cri.go:89] found id: ""
	I0920 19:09:04.901156  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.901164  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:04.901174  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:04.901186  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:04.952435  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:04.952482  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:04.966450  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:04.966481  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:05.035951  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:05.035977  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:05.035991  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:05.120961  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:05.121016  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:07.659497  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:07.672989  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:07.673062  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:07.708200  303486 cri.go:89] found id: ""
	I0920 19:09:07.708236  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.708247  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:07.708256  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:07.708320  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:07.742116  303486 cri.go:89] found id: ""
	I0920 19:09:07.742156  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.742166  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:07.742175  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:07.742231  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:07.774369  303486 cri.go:89] found id: ""
	I0920 19:09:07.774401  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.774410  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:07.774419  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:07.774485  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:07.811727  303486 cri.go:89] found id: ""
	I0920 19:09:07.811756  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.811763  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:07.811769  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:07.811825  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:07.849613  303486 cri.go:89] found id: ""
	I0920 19:09:07.849646  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.849655  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:07.849661  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:07.849715  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:07.884643  303486 cri.go:89] found id: ""
	I0920 19:09:07.884679  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.884690  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:07.884698  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:07.884770  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:07.920240  303486 cri.go:89] found id: ""
	I0920 19:09:07.920272  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.920283  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:07.920292  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:07.920371  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:07.954729  303486 cri.go:89] found id: ""
	I0920 19:09:07.954768  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.954780  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:07.954792  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:07.954808  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:08.008679  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:08.008732  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:08.023637  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:08.023673  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:08.097298  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:08.097325  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:08.097340  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:08.173404  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:08.173444  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:05.783765  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:08.283642  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:08.462994  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:10.965062  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:08.106350  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:10.605138  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:12.605390  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:10.718224  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:10.732520  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:10.732593  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:10.766764  303486 cri.go:89] found id: ""
	I0920 19:09:10.766800  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.766811  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:10.766821  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:10.766887  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:10.800039  303486 cri.go:89] found id: ""
	I0920 19:09:10.800077  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.800087  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:10.800095  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:10.800157  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:10.833931  303486 cri.go:89] found id: ""
	I0920 19:09:10.833969  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.833979  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:10.833985  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:10.834057  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:10.867714  303486 cri.go:89] found id: ""
	I0920 19:09:10.867752  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.867763  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:10.867771  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:10.867840  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:10.903026  303486 cri.go:89] found id: ""
	I0920 19:09:10.903060  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.903068  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:10.903075  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:10.903131  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:10.936968  303486 cri.go:89] found id: ""
	I0920 19:09:10.937002  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.937013  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:10.937021  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:10.937089  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:10.973055  303486 cri.go:89] found id: ""
	I0920 19:09:10.973079  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.973087  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:10.973093  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:10.973145  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:11.010283  303486 cri.go:89] found id: ""
	I0920 19:09:11.010310  303486 logs.go:276] 0 containers: []
	W0920 19:09:11.010321  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:11.010333  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:11.010352  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:11.025202  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:11.025239  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:11.104268  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:11.104295  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:11.104312  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:11.182281  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:11.182326  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:11.219296  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:11.219335  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:13.767833  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:13.780805  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:13.780890  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:13.822288  303486 cri.go:89] found id: ""
	I0920 19:09:13.822317  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.822327  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:13.822334  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:13.822388  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:13.862068  303486 cri.go:89] found id: ""
	I0920 19:09:13.862098  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.862106  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:13.862112  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:13.862163  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:13.898497  303486 cri.go:89] found id: ""
	I0920 19:09:13.898529  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.898540  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:13.898550  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:13.898618  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:13.935994  303486 cri.go:89] found id: ""
	I0920 19:09:13.936022  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.936030  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:13.936038  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:13.936105  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:10.277863  302869 pod_ready.go:82] duration metric: took 4m0.000569658s for pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace to be "Ready" ...
	E0920 19:09:10.277919  302869 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 19:09:10.277965  302869 pod_ready.go:39] duration metric: took 4m13.052343801s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:10.278003  302869 kubeadm.go:597] duration metric: took 4m21.10965758s to restartPrimaryControlPlane
	W0920 19:09:10.278125  302869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 19:09:10.278168  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:09:13.462752  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:15.962371  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:14.605565  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:17.112026  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:13.973764  303486 cri.go:89] found id: ""
	I0920 19:09:13.973801  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.973812  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:13.973820  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:13.973898  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:14.009443  303486 cri.go:89] found id: ""
	I0920 19:09:14.009482  303486 logs.go:276] 0 containers: []
	W0920 19:09:14.009494  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:14.009502  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:14.009577  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:14.045593  303486 cri.go:89] found id: ""
	I0920 19:09:14.045629  303486 logs.go:276] 0 containers: []
	W0920 19:09:14.045639  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:14.045648  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:14.045714  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:14.086273  303486 cri.go:89] found id: ""
	I0920 19:09:14.086310  303486 logs.go:276] 0 containers: []
	W0920 19:09:14.086319  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:14.086330  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:14.086343  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:14.140730  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:14.140772  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:14.154198  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:14.154232  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:14.224716  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:14.224739  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:14.224754  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:14.302625  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:14.302665  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:16.840816  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:16.854905  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:16.855002  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:16.892994  303486 cri.go:89] found id: ""
	I0920 19:09:16.893028  303486 logs.go:276] 0 containers: []
	W0920 19:09:16.893038  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:16.893045  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:16.893103  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:16.931265  303486 cri.go:89] found id: ""
	I0920 19:09:16.931293  303486 logs.go:276] 0 containers: []
	W0920 19:09:16.931307  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:16.931313  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:16.931364  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:16.970085  303486 cri.go:89] found id: ""
	I0920 19:09:16.970119  303486 logs.go:276] 0 containers: []
	W0920 19:09:16.970129  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:16.970138  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:16.970189  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:17.003163  303486 cri.go:89] found id: ""
	I0920 19:09:17.003194  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.003206  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:17.003214  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:17.003282  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:17.040577  303486 cri.go:89] found id: ""
	I0920 19:09:17.040618  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.040633  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:17.040640  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:17.040706  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:17.073946  303486 cri.go:89] found id: ""
	I0920 19:09:17.073986  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.073995  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:17.074006  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:17.074066  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:17.111569  303486 cri.go:89] found id: ""
	I0920 19:09:17.111636  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.111648  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:17.111664  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:17.111730  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:17.148005  303486 cri.go:89] found id: ""
	I0920 19:09:17.148034  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.148044  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:17.148056  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:17.148072  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:17.222281  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:17.222306  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:17.222324  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:17.297577  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:17.297619  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:17.334709  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:17.334740  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:17.386279  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:17.386320  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:17.962802  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:19.963289  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:19.605813  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:22.105024  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:19.901017  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:19.914489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:19.914571  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:19.955023  303486 cri.go:89] found id: ""
	I0920 19:09:19.955051  303486 logs.go:276] 0 containers: []
	W0920 19:09:19.955060  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:19.955067  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:19.955125  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:19.995536  303486 cri.go:89] found id: ""
	I0920 19:09:19.995575  303486 logs.go:276] 0 containers: []
	W0920 19:09:19.995585  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:19.995594  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:19.995650  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:20.031153  303486 cri.go:89] found id: ""
	I0920 19:09:20.031181  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.031190  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:20.031198  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:20.031266  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:20.064145  303486 cri.go:89] found id: ""
	I0920 19:09:20.064174  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.064190  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:20.064199  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:20.064256  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:20.098399  303486 cri.go:89] found id: ""
	I0920 19:09:20.098429  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.098440  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:20.098449  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:20.098505  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:20.138805  303486 cri.go:89] found id: ""
	I0920 19:09:20.138833  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.138843  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:20.138852  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:20.138914  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:20.183291  303486 cri.go:89] found id: ""
	I0920 19:09:20.183322  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.183333  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:20.183342  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:20.183406  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:20.220344  303486 cri.go:89] found id: ""
	I0920 19:09:20.220378  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.220396  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:20.220409  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:20.220426  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:20.271043  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:20.271086  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:20.286724  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:20.286754  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:20.358233  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:20.358273  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:20.358291  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:20.439511  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:20.439568  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:22.982570  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:22.995384  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:22.995475  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:23.029031  303486 cri.go:89] found id: ""
	I0920 19:09:23.029069  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.029081  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:23.029091  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:23.029166  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:23.063291  303486 cri.go:89] found id: ""
	I0920 19:09:23.063325  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.063336  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:23.063343  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:23.063413  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:23.097494  303486 cri.go:89] found id: ""
	I0920 19:09:23.097525  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.097536  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:23.097545  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:23.097610  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:23.132169  303486 cri.go:89] found id: ""
	I0920 19:09:23.132197  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.132204  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:23.132211  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:23.132276  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:23.173651  303486 cri.go:89] found id: ""
	I0920 19:09:23.173682  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.173692  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:23.173700  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:23.173763  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:23.206098  303486 cri.go:89] found id: ""
	I0920 19:09:23.206135  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.206146  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:23.206155  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:23.206216  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:23.245422  303486 cri.go:89] found id: ""
	I0920 19:09:23.245466  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.245479  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:23.245489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:23.245569  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:23.280326  303486 cri.go:89] found id: ""
	I0920 19:09:23.280357  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.280365  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:23.280376  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:23.280390  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:23.330986  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:23.331034  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:23.344751  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:23.344788  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:23.420213  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:23.420239  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:23.420255  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:23.500449  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:23.500491  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:22.462590  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:24.962516  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:24.105502  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:26.110930  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:26.040050  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:26.056377  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:26.056463  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:26.094122  303486 cri.go:89] found id: ""
	I0920 19:09:26.094160  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.094170  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:26.094179  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:26.094246  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:26.129383  303486 cri.go:89] found id: ""
	I0920 19:09:26.129408  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.129415  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:26.129422  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:26.129472  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:26.163579  303486 cri.go:89] found id: ""
	I0920 19:09:26.163611  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.163621  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:26.163630  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:26.163699  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:26.208026  303486 cri.go:89] found id: ""
	I0920 19:09:26.208057  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.208065  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:26.208071  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:26.208138  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:26.245375  303486 cri.go:89] found id: ""
	I0920 19:09:26.245409  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.245421  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:26.245438  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:26.245500  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:26.280283  303486 cri.go:89] found id: ""
	I0920 19:09:26.280315  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.280326  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:26.280336  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:26.280397  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:26.314621  303486 cri.go:89] found id: ""
	I0920 19:09:26.314657  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.314670  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:26.314679  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:26.314773  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:26.347667  303486 cri.go:89] found id: ""
	I0920 19:09:26.347694  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.347701  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:26.347711  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:26.347722  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:26.397221  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:26.397259  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:26.411126  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:26.411157  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:26.479631  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:26.479657  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:26.479686  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:26.555439  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:26.555477  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:26.962845  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:28.963560  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:28.605949  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:30.612349  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:32.104187  303063 pod_ready.go:82] duration metric: took 4m0.005608637s for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	E0920 19:09:32.104213  303063 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 19:09:32.104224  303063 pod_ready.go:39] duration metric: took 4m5.679030104s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:32.104241  303063 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:09:32.104273  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:32.104327  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:32.151755  303063 cri.go:89] found id: "f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:32.151778  303063 cri.go:89] found id: ""
	I0920 19:09:32.151787  303063 logs.go:276] 1 containers: [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f]
	I0920 19:09:32.151866  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.157358  303063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:32.157426  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:32.201227  303063 cri.go:89] found id: "5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:32.201255  303063 cri.go:89] found id: ""
	I0920 19:09:32.201263  303063 logs.go:276] 1 containers: [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281]
	I0920 19:09:32.201327  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.206508  303063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:32.206604  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:32.243509  303063 cri.go:89] found id: "88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:32.243533  303063 cri.go:89] found id: ""
	I0920 19:09:32.243542  303063 logs.go:276] 1 containers: [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f]
	I0920 19:09:32.243595  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.247764  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:32.247836  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:32.283590  303063 cri.go:89] found id: "25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:32.283627  303063 cri.go:89] found id: ""
	I0920 19:09:32.283637  303063 logs.go:276] 1 containers: [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862]
	I0920 19:09:32.283727  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.287826  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:32.287893  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:32.329071  303063 cri.go:89] found id: "3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:32.329111  303063 cri.go:89] found id: ""
	I0920 19:09:32.329123  303063 logs.go:276] 1 containers: [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4]
	I0920 19:09:32.329196  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.333152  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:32.333236  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:32.372444  303063 cri.go:89] found id: "9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:32.372474  303063 cri.go:89] found id: ""
	I0920 19:09:32.372485  303063 logs.go:276] 1 containers: [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba]
	I0920 19:09:32.372548  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.376414  303063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:32.376494  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:32.412244  303063 cri.go:89] found id: ""
	I0920 19:09:32.412280  303063 logs.go:276] 0 containers: []
	W0920 19:09:32.412291  303063 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:32.412299  303063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 19:09:32.412352  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 19:09:32.449451  303063 cri.go:89] found id: "a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:32.449472  303063 cri.go:89] found id: "0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:32.449477  303063 cri.go:89] found id: ""
	I0920 19:09:32.449485  303063 logs.go:276] 2 containers: [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85]
	I0920 19:09:32.449544  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.454960  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.459688  303063 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:32.459720  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:09:32.599208  303063 logs.go:123] Gathering logs for etcd [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281] ...
	I0920 19:09:32.599241  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:32.656960  303063 logs.go:123] Gathering logs for kube-proxy [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4] ...
	I0920 19:09:32.657000  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:32.703259  303063 logs.go:123] Gathering logs for kube-controller-manager [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba] ...
	I0920 19:09:32.703308  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:32.769218  303063 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:32.769260  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:29.096877  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:29.110081  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:29.110170  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:29.152570  303486 cri.go:89] found id: ""
	I0920 19:09:29.152598  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.152608  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:29.152616  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:29.152689  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:29.188596  303486 cri.go:89] found id: ""
	I0920 19:09:29.188627  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.188638  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:29.188645  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:29.188713  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:29.228789  303486 cri.go:89] found id: ""
	I0920 19:09:29.228831  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.228841  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:29.228850  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:29.228913  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:29.260013  303486 cri.go:89] found id: ""
	I0920 19:09:29.260040  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.260048  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:29.260054  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:29.260105  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:29.293373  303486 cri.go:89] found id: ""
	I0920 19:09:29.293401  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.293411  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:29.293418  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:29.293487  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:29.325860  303486 cri.go:89] found id: ""
	I0920 19:09:29.325898  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.325925  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:29.325935  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:29.326027  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:29.358873  303486 cri.go:89] found id: ""
	I0920 19:09:29.358909  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.358921  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:29.358930  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:29.358994  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:29.392029  303486 cri.go:89] found id: ""
	I0920 19:09:29.392057  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.392067  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:29.392080  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:29.392095  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:29.467460  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:29.467504  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:29.508258  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:29.508298  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:29.559238  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:29.559274  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:29.574233  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:29.574264  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:29.649318  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:32.150539  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:32.168442  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:32.168527  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:32.210069  303486 cri.go:89] found id: ""
	I0920 19:09:32.210103  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.210120  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:32.210129  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:32.210191  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:32.243468  303486 cri.go:89] found id: ""
	I0920 19:09:32.243501  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.243511  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:32.243519  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:32.243586  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:32.275958  303486 cri.go:89] found id: ""
	I0920 19:09:32.275988  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.275996  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:32.276003  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:32.276056  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:32.312560  303486 cri.go:89] found id: ""
	I0920 19:09:32.312598  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.312609  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:32.312620  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:32.312695  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:32.347157  303486 cri.go:89] found id: ""
	I0920 19:09:32.347185  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.347193  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:32.347200  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:32.347264  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:32.382787  303486 cri.go:89] found id: ""
	I0920 19:09:32.382820  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.382832  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:32.382841  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:32.382898  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:32.416182  303486 cri.go:89] found id: ""
	I0920 19:09:32.416216  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.416226  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:32.416234  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:32.416297  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:32.448863  303486 cri.go:89] found id: ""
	I0920 19:09:32.448895  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.448906  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:32.448919  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:32.448934  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:32.501882  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:32.501934  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:32.517984  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:32.518014  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:32.588517  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:32.588547  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:32.588560  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:32.671869  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:32.671921  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:35.211780  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:35.225476  303486 kubeadm.go:597] duration metric: took 4m2.827297435s to restartPrimaryControlPlane
	W0920 19:09:35.225582  303486 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 19:09:35.225618  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:09:35.686956  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:35.701803  303486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:09:35.712572  303486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:09:35.722867  303486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:09:35.722894  303486 kubeadm.go:157] found existing configuration files:
	
	I0920 19:09:35.722948  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:09:35.732295  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:09:35.732358  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:09:35.741569  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:09:35.750515  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:09:35.750577  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:09:35.760469  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:09:35.770207  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:09:35.770284  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:09:35.780121  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:09:35.789887  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:09:35.789974  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
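
Editor's note: the grep/rm pairs above are the stale-config check that runs before `kubeadm init`: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the endpoint is absent (here every grep fails simply because the files no longer exist after the reset). A rough Go sketch of that loop, assumed for illustration rather than taken from kubeadm.go:

package main

import (
	"fmt"
	"os/exec"
)

// cleanupStaleConfigs removes any kubeconfig that does not reference the expected
// control-plane endpoint; a failed grep (missing endpoint or missing file) triggers removal.
func cleanupStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
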
	I0920 19:09:35.800914  303486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:09:35.871635  303486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 19:09:35.871691  303486 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:09:36.021411  303486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:09:36.021565  303486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:09:36.021773  303486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 19:09:36.217540  303486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:09:31.462557  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:33.463284  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:35.964501  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:36.723149  302869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.444941441s)
	I0920 19:09:36.723244  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:36.740763  302869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:09:36.751727  302869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:09:36.762710  302869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:09:36.762736  302869 kubeadm.go:157] found existing configuration files:
	
	I0920 19:09:36.762793  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:09:36.773454  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:09:36.773536  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:09:36.784738  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:09:36.794740  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:09:36.794818  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:09:36.805727  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:09:36.818253  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:09:36.818329  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:09:36.831210  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:09:36.842838  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:09:36.842914  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:09:36.853306  302869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:09:36.903121  302869 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 19:09:36.903285  302869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:09:37.025789  302869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:09:37.025969  302869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:09:37.026110  302869 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 19:09:37.034613  302869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:09:36.219542  303486 out.go:235]   - Generating certificates and keys ...
	I0920 19:09:36.219684  303486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:09:36.219769  303486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:09:36.219892  303486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:09:36.219973  303486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:09:36.220090  303486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:09:36.220181  303486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:09:36.220302  303486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:09:36.220414  303486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:09:36.220530  303486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:09:36.220626  303486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:09:36.220691  303486 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:09:36.220767  303486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:09:36.377012  303486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:09:36.706154  303486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:09:36.907341  303486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:09:37.091990  303486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:09:37.122813  303486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:09:37.124422  303486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:09:37.124531  303486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:09:37.277461  303486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:09:33.294289  303063 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:33.294346  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:33.362317  303063 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:33.362364  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:33.375712  303063 logs.go:123] Gathering logs for kube-scheduler [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862] ...
	I0920 19:09:33.375747  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:33.411136  303063 logs.go:123] Gathering logs for storage-provisioner [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5] ...
	I0920 19:09:33.411168  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:33.445649  303063 logs.go:123] Gathering logs for storage-provisioner [0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85] ...
	I0920 19:09:33.445690  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:33.478869  303063 logs.go:123] Gathering logs for container status ...
	I0920 19:09:33.478898  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:33.529433  303063 logs.go:123] Gathering logs for kube-apiserver [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f] ...
	I0920 19:09:33.529480  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:33.570515  303063 logs.go:123] Gathering logs for coredns [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f] ...
	I0920 19:09:33.570560  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:36.107490  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:36.124979  303063 api_server.go:72] duration metric: took 4m17.429642296s to wait for apiserver process to appear ...
	I0920 19:09:36.125014  303063 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:09:36.125069  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:36.125145  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:36.181962  303063 cri.go:89] found id: "f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:36.181990  303063 cri.go:89] found id: ""
	I0920 19:09:36.182001  303063 logs.go:276] 1 containers: [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f]
	I0920 19:09:36.182061  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.186792  303063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:36.186876  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:36.235963  303063 cri.go:89] found id: "5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:36.235993  303063 cri.go:89] found id: ""
	I0920 19:09:36.236003  303063 logs.go:276] 1 containers: [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281]
	I0920 19:09:36.236066  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.241177  303063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:36.241321  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:36.288324  303063 cri.go:89] found id: "88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:36.288353  303063 cri.go:89] found id: ""
	I0920 19:09:36.288361  303063 logs.go:276] 1 containers: [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f]
	I0920 19:09:36.288415  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.293328  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:36.293413  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:36.335126  303063 cri.go:89] found id: "25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:36.335153  303063 cri.go:89] found id: ""
	I0920 19:09:36.335163  303063 logs.go:276] 1 containers: [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862]
	I0920 19:09:36.335226  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.339400  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:36.339470  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:36.375555  303063 cri.go:89] found id: "3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:36.375582  303063 cri.go:89] found id: ""
	I0920 19:09:36.375592  303063 logs.go:276] 1 containers: [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4]
	I0920 19:09:36.375657  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.379679  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:36.379753  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:36.415398  303063 cri.go:89] found id: "9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:36.415424  303063 cri.go:89] found id: ""
	I0920 19:09:36.415434  303063 logs.go:276] 1 containers: [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba]
	I0920 19:09:36.415495  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.420183  303063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:36.420260  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:36.462018  303063 cri.go:89] found id: ""
	I0920 19:09:36.462049  303063 logs.go:276] 0 containers: []
	W0920 19:09:36.462060  303063 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:36.462068  303063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 19:09:36.462129  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 19:09:36.515520  303063 cri.go:89] found id: "a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:36.515551  303063 cri.go:89] found id: "0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:36.515557  303063 cri.go:89] found id: ""
	I0920 19:09:36.515567  303063 logs.go:276] 2 containers: [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85]
	I0920 19:09:36.515628  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.520140  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.524197  303063 logs.go:123] Gathering logs for kube-apiserver [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f] ...
	I0920 19:09:36.524222  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:36.589535  303063 logs.go:123] Gathering logs for coredns [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f] ...
	I0920 19:09:36.589570  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:36.628836  303063 logs.go:123] Gathering logs for storage-provisioner [0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85] ...
	I0920 19:09:36.628865  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:36.667614  303063 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:36.667654  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:37.164164  303063 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:37.164222  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:37.253505  303063 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:37.253550  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:37.272704  303063 logs.go:123] Gathering logs for kube-scheduler [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862] ...
	I0920 19:09:37.272742  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:37.315827  303063 logs.go:123] Gathering logs for kube-proxy [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4] ...
	I0920 19:09:37.315869  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:37.360449  303063 logs.go:123] Gathering logs for kube-controller-manager [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba] ...
	I0920 19:09:37.360479  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:37.428225  303063 logs.go:123] Gathering logs for storage-provisioner [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5] ...
	I0920 19:09:37.428270  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:37.469766  303063 logs.go:123] Gathering logs for container status ...
	I0920 19:09:37.469795  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:37.524517  303063 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:37.524553  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:09:37.652128  303063 logs.go:123] Gathering logs for etcd [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281] ...
	I0920 19:09:37.652162  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:37.036846  302869 out.go:235]   - Generating certificates and keys ...
	I0920 19:09:37.036956  302869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:09:37.037061  302869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:09:37.037194  302869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:09:37.037284  302869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:09:37.037386  302869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:09:37.037462  302869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:09:37.037546  302869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:09:37.037635  302869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:09:37.037734  302869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:09:37.037847  302869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:09:37.037918  302869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:09:37.037995  302869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:09:37.116270  302869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:09:37.615537  302869 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 19:09:37.907479  302869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:09:38.090167  302869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:09:38.209430  302869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:09:38.209780  302869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:09:38.212626  302869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:09:37.279714  303486 out.go:235]   - Booting up control plane ...
	I0920 19:09:37.279861  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:09:37.288448  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:09:37.289724  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:09:37.290822  303486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:09:37.294106  303486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 19:09:38.214873  302869 out.go:235]   - Booting up control plane ...
	I0920 19:09:38.214994  302869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:09:38.215102  302869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:09:38.215199  302869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:09:38.232798  302869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:09:38.238716  302869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:09:38.238784  302869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:09:38.370841  302869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 19:09:38.371037  302869 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 19:09:38.463252  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:40.463322  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:40.212781  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:09:40.217868  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 200:
	ok
	I0920 19:09:40.219021  303063 api_server.go:141] control plane version: v1.31.1
	I0920 19:09:40.219044  303063 api_server.go:131] duration metric: took 4.094023157s to wait for apiserver health ...
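
Editor's note: the healthz wait above polls the apiserver endpoint (here https://192.168.50.230:8444/healthz) until it returns HTTP 200 with body "ok", then records the elapsed duration. A minimal polling sketch under the assumption of a plain HTTPS GET; minikube itself trusts the cluster CA, and certificate verification is skipped below only to keep the example self-contained:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 with body "ok" or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping TLS verification is for this sketch only; the real check trusts the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	start := time.Now()
	if err := waitForHealthz("https://192.168.50.230:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("apiserver healthy after %s\n", time.Since(start))
}
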
	I0920 19:09:40.219053  303063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:09:40.219077  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:40.219128  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:40.264337  303063 cri.go:89] found id: "f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:40.264365  303063 cri.go:89] found id: ""
	I0920 19:09:40.264376  303063 logs.go:276] 1 containers: [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f]
	I0920 19:09:40.264434  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.270143  303063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:40.270222  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:40.321696  303063 cri.go:89] found id: "5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:40.321723  303063 cri.go:89] found id: ""
	I0920 19:09:40.321733  303063 logs.go:276] 1 containers: [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281]
	I0920 19:09:40.321799  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.329068  303063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:40.329149  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:40.387241  303063 cri.go:89] found id: "88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:40.387329  303063 cri.go:89] found id: ""
	I0920 19:09:40.387357  303063 logs.go:276] 1 containers: [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f]
	I0920 19:09:40.387427  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.392896  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:40.392975  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:40.429173  303063 cri.go:89] found id: "25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:40.429200  303063 cri.go:89] found id: ""
	I0920 19:09:40.429210  303063 logs.go:276] 1 containers: [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862]
	I0920 19:09:40.429284  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.434102  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:40.434179  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:40.480569  303063 cri.go:89] found id: "3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:40.480598  303063 cri.go:89] found id: ""
	I0920 19:09:40.480607  303063 logs.go:276] 1 containers: [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4]
	I0920 19:09:40.480669  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.485821  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:40.485935  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:40.531502  303063 cri.go:89] found id: "9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:40.531543  303063 cri.go:89] found id: ""
	I0920 19:09:40.531554  303063 logs.go:276] 1 containers: [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba]
	I0920 19:09:40.531613  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.535699  303063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:40.535769  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:40.569788  303063 cri.go:89] found id: ""
	I0920 19:09:40.569823  303063 logs.go:276] 0 containers: []
	W0920 19:09:40.569835  303063 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:40.569842  303063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 19:09:40.569928  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 19:09:40.604668  303063 cri.go:89] found id: "a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:40.604703  303063 cri.go:89] found id: "0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:40.604710  303063 cri.go:89] found id: ""
	I0920 19:09:40.604721  303063 logs.go:276] 2 containers: [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85]
	I0920 19:09:40.604790  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.608948  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.613331  303063 logs.go:123] Gathering logs for kube-apiserver [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f] ...
	I0920 19:09:40.613360  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:40.657680  303063 logs.go:123] Gathering logs for kube-scheduler [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862] ...
	I0920 19:09:40.657726  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:40.698087  303063 logs.go:123] Gathering logs for kube-controller-manager [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba] ...
	I0920 19:09:40.698125  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:40.753643  303063 logs.go:123] Gathering logs for storage-provisioner [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5] ...
	I0920 19:09:40.753683  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:40.791741  303063 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:40.791790  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:41.176451  303063 logs.go:123] Gathering logs for container status ...
	I0920 19:09:41.176497  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:41.226352  303063 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:41.226386  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:41.307652  303063 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:41.307694  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:41.323271  303063 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:41.323307  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:09:41.441151  303063 logs.go:123] Gathering logs for etcd [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281] ...
	I0920 19:09:41.441195  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:41.495438  303063 logs.go:123] Gathering logs for coredns [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f] ...
	I0920 19:09:41.495494  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:41.543879  303063 logs.go:123] Gathering logs for kube-proxy [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4] ...
	I0920 19:09:41.543930  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:41.595010  303063 logs.go:123] Gathering logs for storage-provisioner [0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85] ...
	I0920 19:09:41.595055  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:44.140048  303063 system_pods.go:59] 8 kube-system pods found
	I0920 19:09:44.140078  303063 system_pods.go:61] "coredns-7c65d6cfc9-427x2" [48b87f9f-4697-4d76-aed1-3d54720172c6] Running
	I0920 19:09:44.140083  303063 system_pods.go:61] "etcd-default-k8s-diff-port-612312" [8537d9ce-87da-425d-a90a-eb4f30f9d23f] Running
	I0920 19:09:44.140087  303063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-612312" [d5705298-0b1f-4f95-bf5d-c795e09dd30e] Running
	I0920 19:09:44.140091  303063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-612312" [99b04f73-91d6-42d4-8def-09b65ca142f6] Running
	I0920 19:09:44.140094  303063 system_pods.go:61] "kube-proxy-zp8l5" [9fe30e51-ef3f-4448-916a-8ad75832b207] Running
	I0920 19:09:44.140097  303063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-612312" [4b9dfd39-5b60-4a90-9edf-0af7e99f0ac6] Running
	I0920 19:09:44.140104  303063 system_pods.go:61] "metrics-server-6867b74b74-2tnqc" [35ce9a11-e606-41da-84bf-b3c5e9a18245] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:44.140108  303063 system_pods.go:61] "storage-provisioner" [6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502] Running
	I0920 19:09:44.140115  303063 system_pods.go:74] duration metric: took 3.921056539s to wait for pod list to return data ...
	I0920 19:09:44.140122  303063 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:09:44.143381  303063 default_sa.go:45] found service account: "default"
	I0920 19:09:44.143409  303063 default_sa.go:55] duration metric: took 3.281031ms for default service account to be created ...
	I0920 19:09:44.143422  303063 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:09:44.148161  303063 system_pods.go:86] 8 kube-system pods found
	I0920 19:09:44.148191  303063 system_pods.go:89] "coredns-7c65d6cfc9-427x2" [48b87f9f-4697-4d76-aed1-3d54720172c6] Running
	I0920 19:09:44.148199  303063 system_pods.go:89] "etcd-default-k8s-diff-port-612312" [8537d9ce-87da-425d-a90a-eb4f30f9d23f] Running
	I0920 19:09:44.148205  303063 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-612312" [d5705298-0b1f-4f95-bf5d-c795e09dd30e] Running
	I0920 19:09:44.148212  303063 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-612312" [99b04f73-91d6-42d4-8def-09b65ca142f6] Running
	I0920 19:09:44.148216  303063 system_pods.go:89] "kube-proxy-zp8l5" [9fe30e51-ef3f-4448-916a-8ad75832b207] Running
	I0920 19:09:44.148221  303063 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-612312" [4b9dfd39-5b60-4a90-9edf-0af7e99f0ac6] Running
	I0920 19:09:44.148230  303063 system_pods.go:89] "metrics-server-6867b74b74-2tnqc" [35ce9a11-e606-41da-84bf-b3c5e9a18245] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:44.148236  303063 system_pods.go:89] "storage-provisioner" [6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502] Running
	I0920 19:09:44.148248  303063 system_pods.go:126] duration metric: took 4.819429ms to wait for k8s-apps to be running ...
	I0920 19:09:44.148260  303063 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:09:44.148312  303063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:44.163839  303063 system_svc.go:56] duration metric: took 15.568956ms WaitForService to wait for kubelet
	I0920 19:09:44.163882  303063 kubeadm.go:582] duration metric: took 4m25.468555427s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:09:44.163911  303063 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:09:44.167622  303063 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:09:44.167656  303063 node_conditions.go:123] node cpu capacity is 2
	I0920 19:09:44.167671  303063 node_conditions.go:105] duration metric: took 3.752828ms to run NodePressure ...
	I0920 19:09:44.167690  303063 start.go:241] waiting for startup goroutines ...
	I0920 19:09:44.167700  303063 start.go:246] waiting for cluster config update ...
	I0920 19:09:44.167716  303063 start.go:255] writing updated cluster config ...
	I0920 19:09:44.168208  303063 ssh_runner.go:195] Run: rm -f paused
	I0920 19:09:44.223860  303063 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:09:44.226056  303063 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-612312" cluster and "default" namespace by default
	I0920 19:09:39.373109  302869 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002236347s
	I0920 19:09:39.373229  302869 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 19:09:44.375102  302869 kubeadm.go:310] [api-check] The API server is healthy after 5.001998039s
	I0920 19:09:44.405405  302869 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 19:09:44.428364  302869 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 19:09:44.470575  302869 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 19:09:44.470870  302869 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-339897 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 19:09:44.505469  302869 kubeadm.go:310] [bootstrap-token] Using token: v5zzut.gmtb3j9b0yqqwvtv
	I0920 19:09:44.507561  302869 out.go:235]   - Configuring RBAC rules ...
	I0920 19:09:44.507721  302869 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 19:09:44.522092  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 19:09:44.555238  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 19:09:44.559971  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 19:09:44.566954  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 19:09:44.574111  302869 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 19:09:44.788900  302869 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 19:09:45.229897  302869 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 19:09:45.788397  302869 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 19:09:45.789415  302869 kubeadm.go:310] 
	I0920 19:09:45.789504  302869 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 19:09:45.789516  302869 kubeadm.go:310] 
	I0920 19:09:45.789614  302869 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 19:09:45.789631  302869 kubeadm.go:310] 
	I0920 19:09:45.789664  302869 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 19:09:45.789804  302869 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 19:09:45.789897  302869 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 19:09:45.789930  302869 kubeadm.go:310] 
	I0920 19:09:45.790043  302869 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 19:09:45.790061  302869 kubeadm.go:310] 
	I0920 19:09:45.790130  302869 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 19:09:45.790145  302869 kubeadm.go:310] 
	I0920 19:09:45.790203  302869 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 19:09:45.790269  302869 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 19:09:45.790330  302869 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 19:09:45.790337  302869 kubeadm.go:310] 
	I0920 19:09:45.790438  302869 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 19:09:45.790549  302869 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 19:09:45.790563  302869 kubeadm.go:310] 
	I0920 19:09:45.790664  302869 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v5zzut.gmtb3j9b0yqqwvtv \
	I0920 19:09:45.790792  302869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 \
	I0920 19:09:45.790823  302869 kubeadm.go:310] 	--control-plane 
	I0920 19:09:45.790835  302869 kubeadm.go:310] 
	I0920 19:09:45.790962  302869 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 19:09:45.790977  302869 kubeadm.go:310] 
	I0920 19:09:45.791045  302869 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v5zzut.gmtb3j9b0yqqwvtv \
	I0920 19:09:45.791164  302869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 
	I0920 19:09:45.792825  302869 kubeadm.go:310] W0920 19:09:36.880654    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:09:45.793122  302869 kubeadm.go:310] W0920 19:09:36.881516    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:09:45.793273  302869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:09:45.793317  302869 cni.go:84] Creating CNI manager for ""
	I0920 19:09:45.793331  302869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:09:45.795282  302869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:09:42.464639  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:44.464714  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:45.796961  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:09:45.808972  302869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:09:45.831122  302869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:09:45.831174  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:45.831208  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-339897 minikube.k8s.io/updated_at=2024_09_20T19_09_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=embed-certs-339897 minikube.k8s.io/primary=true
	I0920 19:09:46.057677  302869 ops.go:34] apiserver oom_adj: -16
	I0920 19:09:46.057798  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:46.558670  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:47.057876  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:47.558913  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:48.057925  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:48.557985  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:49.057925  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:49.558500  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:50.058507  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:50.198032  302869 kubeadm.go:1113] duration metric: took 4.366908909s to wait for elevateKubeSystemPrivileges
	I0920 19:09:50.198074  302869 kubeadm.go:394] duration metric: took 5m1.087269263s to StartCluster
	I0920 19:09:50.198100  302869 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:09:50.198209  302869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:09:50.200736  302869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:09:50.201068  302869 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:09:50.201327  302869 config.go:182] Loaded profile config "embed-certs-339897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:09:50.201393  302869 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:09:50.201482  302869 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-339897"
	I0920 19:09:50.201502  302869 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-339897"
	W0920 19:09:50.201512  302869 addons.go:243] addon storage-provisioner should already be in state true
	I0920 19:09:50.201542  302869 host.go:66] Checking if "embed-certs-339897" exists ...
	I0920 19:09:50.202007  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.202050  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.202261  302869 addons.go:69] Setting default-storageclass=true in profile "embed-certs-339897"
	I0920 19:09:50.202285  302869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-339897"
	I0920 19:09:50.202285  302869 addons.go:69] Setting metrics-server=true in profile "embed-certs-339897"
	I0920 19:09:50.202311  302869 addons.go:234] Setting addon metrics-server=true in "embed-certs-339897"
	W0920 19:09:50.202319  302869 addons.go:243] addon metrics-server should already be in state true
	I0920 19:09:50.202349  302869 host.go:66] Checking if "embed-certs-339897" exists ...
	I0920 19:09:50.202688  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.202752  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.202755  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.202793  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.203329  302869 out.go:177] * Verifying Kubernetes components...
	I0920 19:09:50.204655  302869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:09:50.224081  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46289
	I0920 19:09:50.224334  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45801
	I0920 19:09:50.224337  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0920 19:09:50.224579  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.224941  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.225039  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.225214  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.225231  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.225643  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.225682  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.225699  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.225798  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.225818  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.226018  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.226080  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.226564  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.226594  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.226777  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.227444  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.227494  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.229747  302869 addons.go:234] Setting addon default-storageclass=true in "embed-certs-339897"
	W0920 19:09:50.229771  302869 addons.go:243] addon default-storageclass should already be in state true
	I0920 19:09:50.229803  302869 host.go:66] Checking if "embed-certs-339897" exists ...
	I0920 19:09:50.230208  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.230261  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.243865  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42249
	I0920 19:09:50.244292  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.244828  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.244851  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.245080  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I0920 19:09:50.245252  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.245714  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.245810  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.246303  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.246323  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.246661  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.246806  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.248050  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:09:50.248671  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:09:50.250223  302869 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 19:09:50.250319  302869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:09:46.963562  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:48.965266  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:50.250485  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38237
	I0920 19:09:50.250954  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.251418  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.251435  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.251535  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:09:50.251556  302869 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:09:50.251594  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:09:50.251680  302869 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:09:50.251693  302869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:09:50.251706  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:09:50.251889  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.252452  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.252502  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.255422  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.255692  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.255902  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:09:50.255928  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.256372  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:09:50.256396  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.256442  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:09:50.256663  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:09:50.256697  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:09:50.256840  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:09:50.256868  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:09:50.257066  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:09:50.257089  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:09:50.257268  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:09:50.272424  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35925
	I0920 19:09:50.273107  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.273729  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.273746  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.274208  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.274402  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.276189  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:09:50.276384  302869 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:09:50.276399  302869 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:09:50.276417  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:09:50.279319  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.279718  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:09:50.279747  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.279850  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:09:50.280044  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:09:50.280305  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:09:50.280481  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:09:50.407262  302869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:09:50.455491  302869 node_ready.go:35] waiting up to 6m0s for node "embed-certs-339897" to be "Ready" ...
	I0920 19:09:50.503634  302869 node_ready.go:49] node "embed-certs-339897" has status "Ready":"True"
	I0920 19:09:50.503663  302869 node_ready.go:38] duration metric: took 48.13478ms for node "embed-certs-339897" to be "Ready" ...
	I0920 19:09:50.503672  302869 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:50.532327  302869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:50.589446  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:09:50.589482  302869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 19:09:50.613277  302869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:09:50.619161  302869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:09:50.662197  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:09:50.662232  302869 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:09:50.753073  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:09:50.753106  302869 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:09:50.842679  302869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:09:51.790932  302869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.171721983s)
	I0920 19:09:51.790997  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791012  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.791029  302869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.177708427s)
	I0920 19:09:51.791073  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791089  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.791380  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.791438  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.791444  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.791483  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791380  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.791527  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.791541  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791556  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.791416  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.791493  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.793128  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.793159  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.793177  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.793149  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.793148  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.793208  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.820906  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.820939  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.821290  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.821312  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:52.003182  302869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.160452395s)
	I0920 19:09:52.003247  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:52.003261  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:52.003593  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:52.003600  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:52.003622  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:52.003632  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:52.003640  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:52.003985  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:52.004003  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:52.004017  302869 addons.go:475] Verifying addon metrics-server=true in "embed-certs-339897"
	I0920 19:09:52.006444  302869 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 19:09:52.008313  302869 addons.go:510] duration metric: took 1.806914162s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 19:09:52.539578  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:53.539999  302869 pod_ready.go:93] pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:53.540026  302869 pod_ready.go:82] duration metric: took 3.007669334s for pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:53.540036  302869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:51.463340  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:53.963461  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:55.547997  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:57.552686  302869 pod_ready.go:93] pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.552714  302869 pod_ready.go:82] duration metric: took 4.01267227s for pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.552724  302869 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.560885  302869 pod_ready.go:93] pod "etcd-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.560910  302869 pod_ready.go:82] duration metric: took 8.179457ms for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.560919  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.577414  302869 pod_ready.go:93] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.577441  302869 pod_ready.go:82] duration metric: took 16.515029ms for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.577451  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.588547  302869 pod_ready.go:93] pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.588574  302869 pod_ready.go:82] duration metric: took 11.116334ms for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.588583  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-whcbh" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.594919  302869 pod_ready.go:93] pod "kube-proxy-whcbh" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.594942  302869 pod_ready.go:82] duration metric: took 6.35266ms for pod "kube-proxy-whcbh" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.594951  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.943559  302869 pod_ready.go:93] pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.943585  302869 pod_ready.go:82] duration metric: took 348.626555ms for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.943592  302869 pod_ready.go:39] duration metric: took 7.439908161s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:57.943609  302869 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:09:57.943662  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:57.959537  302869 api_server.go:72] duration metric: took 7.758426976s to wait for apiserver process to appear ...
	I0920 19:09:57.959567  302869 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:09:57.959594  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:09:57.964316  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 200:
	ok
	I0920 19:09:57.965668  302869 api_server.go:141] control plane version: v1.31.1
	I0920 19:09:57.965690  302869 api_server.go:131] duration metric: took 6.115168ms to wait for apiserver health ...
	I0920 19:09:57.965697  302869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:09:58.148306  302869 system_pods.go:59] 9 kube-system pods found
	I0920 19:09:58.148339  302869 system_pods.go:61] "coredns-7c65d6cfc9-2zlww" [5eb78763-7160-4ae9-80c3-87a82a6dc992] Running
	I0920 19:09:58.148345  302869 system_pods.go:61] "coredns-7c65d6cfc9-7fxdr" [85a441e8-39b0-4623-a7bd-eebbd1574f20] Running
	I0920 19:09:58.148349  302869 system_pods.go:61] "etcd-embed-certs-339897" [150a2276-3896-498e-89f7-44cf4554da69] Running
	I0920 19:09:58.148352  302869 system_pods.go:61] "kube-apiserver-embed-certs-339897" [396520a3-2567-4267-852d-9f9525dd5e01] Running
	I0920 19:09:58.148356  302869 system_pods.go:61] "kube-controller-manager-embed-certs-339897" [7f64ad97-3230-4cf5-92ad-cf58ef88a2b0] Running
	I0920 19:09:58.148359  302869 system_pods.go:61] "kube-proxy-whcbh" [3a2dbb60-1a51-4874-98b8-75d1a35b0512] Running
	I0920 19:09:58.148361  302869 system_pods.go:61] "kube-scheduler-embed-certs-339897" [31214783-f8cf-46c6-a305-fde7692dfc72] Running
	I0920 19:09:58.148367  302869 system_pods.go:61] "metrics-server-6867b74b74-tw9fh" [8366591d-8916-4b9f-be8a-64ddc185f576] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:58.148371  302869 system_pods.go:61] "storage-provisioner" [8bcc482a-6905-436a-8d90-7eee9ba18f8b] Running
	I0920 19:09:58.148381  302869 system_pods.go:74] duration metric: took 182.677921ms to wait for pod list to return data ...
	I0920 19:09:58.148387  302869 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:09:58.344318  302869 default_sa.go:45] found service account: "default"
	I0920 19:09:58.344346  302869 default_sa.go:55] duration metric: took 195.952788ms for default service account to be created ...
	I0920 19:09:58.344357  302869 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:09:58.547996  302869 system_pods.go:86] 9 kube-system pods found
	I0920 19:09:58.548034  302869 system_pods.go:89] "coredns-7c65d6cfc9-2zlww" [5eb78763-7160-4ae9-80c3-87a82a6dc992] Running
	I0920 19:09:58.548043  302869 system_pods.go:89] "coredns-7c65d6cfc9-7fxdr" [85a441e8-39b0-4623-a7bd-eebbd1574f20] Running
	I0920 19:09:58.548048  302869 system_pods.go:89] "etcd-embed-certs-339897" [150a2276-3896-498e-89f7-44cf4554da69] Running
	I0920 19:09:58.548054  302869 system_pods.go:89] "kube-apiserver-embed-certs-339897" [396520a3-2567-4267-852d-9f9525dd5e01] Running
	I0920 19:09:58.548060  302869 system_pods.go:89] "kube-controller-manager-embed-certs-339897" [7f64ad97-3230-4cf5-92ad-cf58ef88a2b0] Running
	I0920 19:09:58.548066  302869 system_pods.go:89] "kube-proxy-whcbh" [3a2dbb60-1a51-4874-98b8-75d1a35b0512] Running
	I0920 19:09:58.548070  302869 system_pods.go:89] "kube-scheduler-embed-certs-339897" [31214783-f8cf-46c6-a305-fde7692dfc72] Running
	I0920 19:09:58.548079  302869 system_pods.go:89] "metrics-server-6867b74b74-tw9fh" [8366591d-8916-4b9f-be8a-64ddc185f576] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:58.548085  302869 system_pods.go:89] "storage-provisioner" [8bcc482a-6905-436a-8d90-7eee9ba18f8b] Running
	I0920 19:09:58.548099  302869 system_pods.go:126] duration metric: took 203.735171ms to wait for k8s-apps to be running ...
	I0920 19:09:58.548108  302869 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:09:58.548165  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:58.563235  302869 system_svc.go:56] duration metric: took 15.107997ms WaitForService to wait for kubelet
	I0920 19:09:58.563274  302869 kubeadm.go:582] duration metric: took 8.362165276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:09:58.563299  302869 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:09:58.744093  302869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:09:58.744155  302869 node_conditions.go:123] node cpu capacity is 2
	I0920 19:09:58.744171  302869 node_conditions.go:105] duration metric: took 180.864643ms to run NodePressure ...
	I0920 19:09:58.744186  302869 start.go:241] waiting for startup goroutines ...
	I0920 19:09:58.744196  302869 start.go:246] waiting for cluster config update ...
	I0920 19:09:58.744220  302869 start.go:255] writing updated cluster config ...
	I0920 19:09:58.744526  302869 ssh_runner.go:195] Run: rm -f paused
	I0920 19:09:58.794946  302869 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:09:58.797418  302869 out.go:177] * Done! kubectl is now configured to use "embed-certs-339897" cluster and "default" namespace by default
	I0920 19:09:56.464024  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:58.464282  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:00.963419  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:02.963506  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:04.963804  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:07.463546  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:09.962855  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:11.963447  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:13.964915  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:17.296411  303486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 19:10:17.296525  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:17.296765  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:16.462968  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:18.963906  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:22.297630  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:22.297923  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:21.463201  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:22.457112  302538 pod_ready.go:82] duration metric: took 4m0.000881628s for pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace to be "Ready" ...
	E0920 19:10:22.457161  302538 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 19:10:22.457180  302538 pod_ready.go:39] duration metric: took 4m14.047738931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:10:22.457208  302538 kubeadm.go:597] duration metric: took 4m21.028566787s to restartPrimaryControlPlane
	W0920 19:10:22.457265  302538 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 19:10:22.457291  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:10:32.298239  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:32.298525  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:48.632052  302538 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.17473972s)
	I0920 19:10:48.632143  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:10:48.648205  302538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:10:48.658969  302538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:10:48.668954  302538 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:10:48.668981  302538 kubeadm.go:157] found existing configuration files:
	
	I0920 19:10:48.669035  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:10:48.678138  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:10:48.678229  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:10:48.687960  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:10:48.697578  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:10:48.697644  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:10:48.707573  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:10:48.717059  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:10:48.717123  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:10:48.727642  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:10:48.737599  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:10:48.737681  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:10:48.749542  302538 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:10:48.795278  302538 kubeadm.go:310] W0920 19:10:48.780113    2961 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:10:48.796096  302538 kubeadm.go:310] W0920 19:10:48.780928    2961 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:10:48.910958  302538 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:10:52.299257  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:52.299561  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:56.716717  302538 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 19:10:56.716805  302538 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:10:56.716938  302538 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:10:56.717078  302538 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:10:56.717170  302538 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 19:10:56.717225  302538 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:10:56.719086  302538 out.go:235]   - Generating certificates and keys ...
	I0920 19:10:56.719199  302538 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:10:56.719286  302538 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:10:56.719407  302538 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:10:56.719505  302538 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:10:56.719624  302538 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:10:56.719720  302538 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:10:56.719811  302538 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:10:56.719928  302538 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:10:56.720049  302538 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:10:56.720154  302538 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:10:56.720224  302538 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:10:56.720287  302538 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:10:56.720334  302538 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:10:56.720386  302538 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 19:10:56.720432  302538 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:10:56.720486  302538 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:10:56.720533  302538 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:10:56.720606  302538 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:10:56.720701  302538 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:10:56.722504  302538 out.go:235]   - Booting up control plane ...
	I0920 19:10:56.722620  302538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:10:56.722748  302538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:10:56.722872  302538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:10:56.723020  302538 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:10:56.723105  302538 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:10:56.723148  302538 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:10:56.723337  302538 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 19:10:56.723455  302538 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 19:10:56.723515  302538 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.448196ms
	I0920 19:10:56.723612  302538 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 19:10:56.723706  302538 kubeadm.go:310] [api-check] The API server is healthy after 5.001495273s
	I0920 19:10:56.723888  302538 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 19:10:56.724046  302538 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 19:10:56.724131  302538 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 19:10:56.724406  302538 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-037711 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 19:10:56.724464  302538 kubeadm.go:310] [bootstrap-token] Using token: 2hi1gl.ipidz4nvj8gip8th
	I0920 19:10:56.726099  302538 out.go:235]   - Configuring RBAC rules ...
	I0920 19:10:56.726212  302538 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 19:10:56.726315  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 19:10:56.726479  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 19:10:56.726641  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 19:10:56.726794  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 19:10:56.726926  302538 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 19:10:56.727082  302538 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 19:10:56.727154  302538 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 19:10:56.727202  302538 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 19:10:56.727209  302538 kubeadm.go:310] 
	I0920 19:10:56.727261  302538 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 19:10:56.727267  302538 kubeadm.go:310] 
	I0920 19:10:56.727363  302538 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 19:10:56.727383  302538 kubeadm.go:310] 
	I0920 19:10:56.727424  302538 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 19:10:56.727507  302538 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 19:10:56.727607  302538 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 19:10:56.727620  302538 kubeadm.go:310] 
	I0920 19:10:56.727699  302538 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 19:10:56.727712  302538 kubeadm.go:310] 
	I0920 19:10:56.727775  302538 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 19:10:56.727790  302538 kubeadm.go:310] 
	I0920 19:10:56.727865  302538 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 19:10:56.727969  302538 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 19:10:56.728032  302538 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 19:10:56.728038  302538 kubeadm.go:310] 
	I0920 19:10:56.728106  302538 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 19:10:56.728171  302538 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 19:10:56.728177  302538 kubeadm.go:310] 
	I0920 19:10:56.728271  302538 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2hi1gl.ipidz4nvj8gip8th \
	I0920 19:10:56.728406  302538 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 \
	I0920 19:10:56.728438  302538 kubeadm.go:310] 	--control-plane 
	I0920 19:10:56.728451  302538 kubeadm.go:310] 
	I0920 19:10:56.728571  302538 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 19:10:56.728577  302538 kubeadm.go:310] 
	I0920 19:10:56.728675  302538 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2hi1gl.ipidz4nvj8gip8th \
	I0920 19:10:56.728823  302538 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 
	I0920 19:10:56.728837  302538 cni.go:84] Creating CNI manager for ""
	I0920 19:10:56.728843  302538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:10:56.730851  302538 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:10:56.732462  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:10:56.745326  302538 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:10:56.764458  302538 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:10:56.764563  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:56.764620  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-037711 minikube.k8s.io/updated_at=2024_09_20T19_10_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=no-preload-037711 minikube.k8s.io/primary=true
	I0920 19:10:56.792026  302538 ops.go:34] apiserver oom_adj: -16
	I0920 19:10:56.976178  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:57.477172  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:57.977076  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:58.476357  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:58.977162  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:59.476924  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:59.976506  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:11:00.080925  302538 kubeadm.go:1113] duration metric: took 3.316440483s to wait for elevateKubeSystemPrivileges
	I0920 19:11:00.080968  302538 kubeadm.go:394] duration metric: took 4m58.701872852s to StartCluster
	I0920 19:11:00.080994  302538 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:11:00.081106  302538 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:11:00.082815  302538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:11:00.083064  302538 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:11:00.083137  302538 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:11:00.083243  302538 addons.go:69] Setting storage-provisioner=true in profile "no-preload-037711"
	I0920 19:11:00.083263  302538 addons.go:234] Setting addon storage-provisioner=true in "no-preload-037711"
	W0920 19:11:00.083272  302538 addons.go:243] addon storage-provisioner should already be in state true
	I0920 19:11:00.083263  302538 addons.go:69] Setting default-storageclass=true in profile "no-preload-037711"
	I0920 19:11:00.083299  302538 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-037711"
	I0920 19:11:00.083308  302538 host.go:66] Checking if "no-preload-037711" exists ...
	I0920 19:11:00.083304  302538 addons.go:69] Setting metrics-server=true in profile "no-preload-037711"
	I0920 19:11:00.083342  302538 addons.go:234] Setting addon metrics-server=true in "no-preload-037711"
	W0920 19:11:00.083354  302538 addons.go:243] addon metrics-server should already be in state true
	I0920 19:11:00.083385  302538 host.go:66] Checking if "no-preload-037711" exists ...
	I0920 19:11:00.083315  302538 config.go:182] Loaded profile config "no-preload-037711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:11:00.083667  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.083709  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.083715  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.083750  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.083864  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.083912  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.084969  302538 out.go:177] * Verifying Kubernetes components...
	I0920 19:11:00.086652  302538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:11:00.102128  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
	I0920 19:11:00.102362  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33853
	I0920 19:11:00.102750  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43575
	I0920 19:11:00.102879  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.103041  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.103431  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.103635  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.103651  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.103767  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.103783  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.104022  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.104040  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.104042  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.104180  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.104383  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.104394  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.104842  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.104881  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.104927  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.104963  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.107816  302538 addons.go:234] Setting addon default-storageclass=true in "no-preload-037711"
	W0920 19:11:00.107836  302538 addons.go:243] addon default-storageclass should already be in state true
	I0920 19:11:00.107865  302538 host.go:66] Checking if "no-preload-037711" exists ...
	I0920 19:11:00.108193  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.108236  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.121661  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I0920 19:11:00.122693  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.123520  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.123642  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.124299  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.124530  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.125624  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40695
	I0920 19:11:00.126343  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.126439  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I0920 19:11:00.126868  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.126947  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:11:00.127277  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.127302  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.127572  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.127599  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.127646  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.127902  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.128095  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.128318  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.128360  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.129099  302538 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:11:00.129788  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:11:00.130688  302538 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:11:00.130713  302538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:11:00.130732  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:11:00.131393  302538 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 19:11:00.132404  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:11:00.132432  302538 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:11:00.132454  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:11:00.134112  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.134627  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:11:00.134690  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.135041  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:11:00.135215  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:11:00.135448  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:11:00.135550  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:11:00.136315  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.136816  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:11:00.136849  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.137011  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:11:00.137231  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:11:00.137409  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:11:00.137589  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:11:00.166369  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I0920 19:11:00.166884  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.167464  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.167483  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.167850  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.168037  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.169668  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:11:00.169875  302538 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:11:00.169891  302538 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:11:00.169925  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:11:00.172907  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.173383  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:11:00.173416  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.173577  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:11:00.173820  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:11:00.174010  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:11:00.174212  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:11:00.275468  302538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:11:00.290839  302538 node_ready.go:35] waiting up to 6m0s for node "no-preload-037711" to be "Ready" ...
	I0920 19:11:00.300222  302538 node_ready.go:49] node "no-preload-037711" has status "Ready":"True"
	I0920 19:11:00.300244  302538 node_ready.go:38] duration metric: took 9.368069ms for node "no-preload-037711" to be "Ready" ...
	I0920 19:11:00.300253  302538 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:11:00.306099  302538 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:00.364927  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:11:00.364956  302538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 19:11:00.382910  302538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:11:00.392581  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:11:00.392611  302538 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:11:00.404275  302538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:11:00.442677  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:11:00.442707  302538 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:11:00.500976  302538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:11:01.337157  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337196  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337169  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337265  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337558  302538 main.go:141] libmachine: (no-preload-037711) DBG | Closing plugin on server side
	I0920 19:11:01.337573  302538 main.go:141] libmachine: (no-preload-037711) DBG | Closing plugin on server side
	I0920 19:11:01.337600  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.337613  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.337641  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337649  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337685  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.337702  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.337711  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337720  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337961  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.337978  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.338064  302538 main.go:141] libmachine: (no-preload-037711) DBG | Closing plugin on server side
	I0920 19:11:01.338114  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.338133  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.395956  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.395989  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.396327  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.396355  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.580133  302538 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.079115769s)
	I0920 19:11:01.580188  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.580203  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.580548  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.580568  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.580578  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.580586  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.580817  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.580842  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.580853  302538 addons.go:475] Verifying addon metrics-server=true in "no-preload-037711"
	I0920 19:11:01.582786  302538 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 19:11:01.584283  302538 addons.go:510] duration metric: took 1.501156808s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 19:11:02.314471  302538 pod_ready.go:103] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:04.817174  302538 pod_ready.go:103] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:07.312399  302538 pod_ready.go:103] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:07.812969  302538 pod_ready.go:93] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:07.812999  302538 pod_ready.go:82] duration metric: took 7.506877081s for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:07.813008  302538 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:07.818172  302538 pod_ready.go:93] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:07.818200  302538 pod_ready.go:82] duration metric: took 5.184579ms for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:07.818211  302538 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:09.825772  302538 pod_ready.go:103] pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:10.325453  302538 pod_ready.go:93] pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:10.325479  302538 pod_ready.go:82] duration metric: took 2.507262085s for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:10.325489  302538 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:10.331181  302538 pod_ready.go:93] pod "kube-scheduler-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:10.331208  302538 pod_ready.go:82] duration metric: took 5.711573ms for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:10.331216  302538 pod_ready.go:39] duration metric: took 10.030954081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:11:10.331233  302538 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:11:10.331286  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:11:10.348104  302538 api_server.go:72] duration metric: took 10.265008499s to wait for apiserver process to appear ...
	I0920 19:11:10.348135  302538 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:11:10.348157  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:11:10.352242  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 200:
	ok
	I0920 19:11:10.353228  302538 api_server.go:141] control plane version: v1.31.1
	I0920 19:11:10.353249  302538 api_server.go:131] duration metric: took 5.107446ms to wait for apiserver health ...
	I0920 19:11:10.353257  302538 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:11:10.358560  302538 system_pods.go:59] 9 kube-system pods found
	I0920 19:11:10.358588  302538 system_pods.go:61] "coredns-7c65d6cfc9-gdfh9" [61c6d6d8-62b9-4db3-a3c3-fd0daec82a9f] Running
	I0920 19:11:10.358593  302538 system_pods.go:61] "coredns-7c65d6cfc9-h84nm" [6ada3ba7-1ccd-474b-850b-c00a77dfbb92] Running
	I0920 19:11:10.358597  302538 system_pods.go:61] "etcd-no-preload-037711" [9ace2dcd-0562-46d5-99be-65be4ea053d9] Running
	I0920 19:11:10.358601  302538 system_pods.go:61] "kube-apiserver-no-preload-037711" [1dbfa130-d2dd-420d-a32c-1e82b535c112] Running
	I0920 19:11:10.358604  302538 system_pods.go:61] "kube-controller-manager-no-preload-037711" [56462390-dedd-4281-ac85-2671f7a10cb1] Running
	I0920 19:11:10.358607  302538 system_pods.go:61] "kube-proxy-bvfqh" [2170ef3f-58f0-4d42-9f15-d9c952e0e2ec] Running
	I0920 19:11:10.358610  302538 system_pods.go:61] "kube-scheduler-no-preload-037711" [e996ce53-7ee6-4d1d-bd0b-8188d76966b9] Running
	I0920 19:11:10.358617  302538 system_pods.go:61] "metrics-server-6867b74b74-rpfqm" [ba7c8518-6c3e-4751-a9a5-29c77990a29c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:11:10.358620  302538 system_pods.go:61] "storage-provisioner" [e7f05c0a-c6be-4e68-959e-966c17c9cc5e] Running
	I0920 19:11:10.358629  302538 system_pods.go:74] duration metric: took 5.365343ms to wait for pod list to return data ...
	I0920 19:11:10.358635  302538 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:11:10.361229  302538 default_sa.go:45] found service account: "default"
	I0920 19:11:10.361255  302538 default_sa.go:55] duration metric: took 2.612292ms for default service account to be created ...
	I0920 19:11:10.361264  302538 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:11:10.367188  302538 system_pods.go:86] 9 kube-system pods found
	I0920 19:11:10.367221  302538 system_pods.go:89] "coredns-7c65d6cfc9-gdfh9" [61c6d6d8-62b9-4db3-a3c3-fd0daec82a9f] Running
	I0920 19:11:10.367229  302538 system_pods.go:89] "coredns-7c65d6cfc9-h84nm" [6ada3ba7-1ccd-474b-850b-c00a77dfbb92] Running
	I0920 19:11:10.367235  302538 system_pods.go:89] "etcd-no-preload-037711" [9ace2dcd-0562-46d5-99be-65be4ea053d9] Running
	I0920 19:11:10.367241  302538 system_pods.go:89] "kube-apiserver-no-preload-037711" [1dbfa130-d2dd-420d-a32c-1e82b535c112] Running
	I0920 19:11:10.367248  302538 system_pods.go:89] "kube-controller-manager-no-preload-037711" [56462390-dedd-4281-ac85-2671f7a10cb1] Running
	I0920 19:11:10.367254  302538 system_pods.go:89] "kube-proxy-bvfqh" [2170ef3f-58f0-4d42-9f15-d9c952e0e2ec] Running
	I0920 19:11:10.367260  302538 system_pods.go:89] "kube-scheduler-no-preload-037711" [e996ce53-7ee6-4d1d-bd0b-8188d76966b9] Running
	I0920 19:11:10.367267  302538 system_pods.go:89] "metrics-server-6867b74b74-rpfqm" [ba7c8518-6c3e-4751-a9a5-29c77990a29c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:11:10.367273  302538 system_pods.go:89] "storage-provisioner" [e7f05c0a-c6be-4e68-959e-966c17c9cc5e] Running
	I0920 19:11:10.367283  302538 system_pods.go:126] duration metric: took 6.01247ms to wait for k8s-apps to be running ...
	I0920 19:11:10.367292  302538 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:11:10.367354  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:11:10.381551  302538 system_svc.go:56] duration metric: took 14.250301ms WaitForService to wait for kubelet
	I0920 19:11:10.381582  302538 kubeadm.go:582] duration metric: took 10.298492318s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:11:10.381601  302538 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:11:10.385405  302538 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:11:10.385442  302538 node_conditions.go:123] node cpu capacity is 2
	I0920 19:11:10.385455  302538 node_conditions.go:105] duration metric: took 3.849463ms to run NodePressure ...
	I0920 19:11:10.385468  302538 start.go:241] waiting for startup goroutines ...
	I0920 19:11:10.385474  302538 start.go:246] waiting for cluster config update ...
	I0920 19:11:10.385485  302538 start.go:255] writing updated cluster config ...
	I0920 19:11:10.385786  302538 ssh_runner.go:195] Run: rm -f paused
	I0920 19:11:10.436362  302538 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:11:10.438538  302538 out.go:177] * Done! kubectl is now configured to use "no-preload-037711" cluster and "default" namespace by default
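The startup sequence above finishes by polling the apiserver's healthz endpoint (api_server.go: "Checking apiserver healthz at https://192.168.61.136:8443/healthz") until it answers 200. Below is a minimal, standalone Go sketch of that kind of health poll; it is illustrative only, not minikube's actual implementation, and it assumes TLS verification is skipped because the apiserver serves a cluster-internal certificate.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or timeout expires.
// TLS verification is skipped for the self-signed apiserver certificate.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	// Endpoint taken from the log above; adjust for your cluster.
	if err := waitForHealthz("https://192.168.61.136:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver is healthy")
}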
	I0920 19:11:32.301334  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:11:32.302020  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:11:32.302048  303486 kubeadm.go:310] 
	I0920 19:11:32.302147  303486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 19:11:32.302252  303486 kubeadm.go:310] 		timed out waiting for the condition
	I0920 19:11:32.302279  303486 kubeadm.go:310] 
	I0920 19:11:32.302366  303486 kubeadm.go:310] 	This error is likely caused by:
	I0920 19:11:32.302453  303486 kubeadm.go:310] 		- The kubelet is not running
	I0920 19:11:32.302713  303486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 19:11:32.302731  303486 kubeadm.go:310] 
	I0920 19:11:32.303023  303486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 19:11:32.303099  303486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 19:11:32.303200  303486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 19:11:32.303232  303486 kubeadm.go:310] 
	I0920 19:11:32.303438  303486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 19:11:32.303669  303486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 19:11:32.303699  303486 kubeadm.go:310] 
	I0920 19:11:32.303965  303486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 19:11:32.304199  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 19:11:32.304410  303486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 19:11:32.304577  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 19:11:32.304624  303486 kubeadm.go:310] 
	I0920 19:11:32.305105  303486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:11:32.305465  303486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 19:11:32.305655  303486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0920 19:11:32.305713  303486 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
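The kubeadm failure above suggests a fixed set of diagnostics: check kubelet with systemctl/journalctl and list control-plane containers with crictl. The sketch below simply runs those suggested commands from Go and prints their output; it is a convenience wrapper for the hints in the log, assumes the binaries exist on the node being debugged, and is not part of minikube.

package main

import (
	"fmt"
	"os/exec"
)

// runDiag runs one diagnostic command suggested in the kubeadm output
// above and prints its combined output; failures are reported, not fatal.
func runDiag(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err != nil {
		fmt.Println("command failed:", err)
	}
}

func main() {
	// Commands taken verbatim from the kubeadm troubleshooting hints.
	runDiag("systemctl", "status", "kubelet", "--no-pager")
	runDiag("journalctl", "-xeu", "kubelet", "--no-pager", "-n", "50")
	runDiag("crictl", "--runtime-endpoint", "/var/run/crio/crio.sock", "ps", "-a")
}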
	
	I0920 19:11:32.305758  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:11:32.760742  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:11:32.775675  303486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:11:32.785785  303486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:11:32.785806  303486 kubeadm.go:157] found existing configuration files:
	
	I0920 19:11:32.785854  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:11:32.795133  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:11:32.795210  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:11:32.805681  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:11:32.815299  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:11:32.815362  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:11:32.827215  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:11:32.836597  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:11:32.836682  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:11:32.846621  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:11:32.855610  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:11:32.855675  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
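The cleanup loop above greps each kubeconfig-style file under /etc/kubernetes for the control-plane endpoint and removes the file when the endpoint is absent or the file is missing, so kubeadm can regenerate it on the retry. The Go sketch below mirrors that check-then-remove decision on the local filesystem; minikube actually performs it over SSH via ssh_runner, and the helper name cleanupStaleConfig is invented here for illustration.

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanupStaleConfig removes cfgPath unless it still references the
// expected control-plane endpoint, mirroring the grep/rm steps above.
func cleanupStaleConfig(cfgPath, endpoint string) error {
	data, err := os.ReadFile(cfgPath)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // still points at the right control plane; keep it
	}
	// Missing file or wrong endpoint: remove it so kubeadm regenerates it.
	if rmErr := os.Remove(cfgPath); rmErr != nil && !os.IsNotExist(rmErr) {
		return rmErr
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := cleanupStaleConfig(f, endpoint); err != nil {
			fmt.Println("cleanup failed for", f, ":", err)
		}
	}
}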
	I0920 19:11:32.866824  303486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:11:33.103745  303486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:13:29.101212  303486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 19:13:29.101347  303486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 19:13:29.103031  303486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 19:13:29.103142  303486 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:13:29.103216  303486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:13:29.103318  303486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:13:29.103437  303486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 19:13:29.103507  303486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:13:29.105521  303486 out.go:235]   - Generating certificates and keys ...
	I0920 19:13:29.105622  303486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:13:29.105704  303486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:13:29.105820  303486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:13:29.105955  303486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:13:29.106058  303486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:13:29.106132  303486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:13:29.106219  303486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:13:29.106318  303486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:13:29.106430  303486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:13:29.106548  303486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:13:29.106611  303486 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:13:29.106699  303486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:13:29.106766  303486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:13:29.106844  303486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:13:29.106935  303486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:13:29.107011  303486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:13:29.107117  303486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:13:29.107223  303486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:13:29.107289  303486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:13:29.107376  303486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:13:29.108804  303486 out.go:235]   - Booting up control plane ...
	I0920 19:13:29.108952  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:13:29.109021  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:13:29.109082  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:13:29.109166  303486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:13:29.109313  303486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 19:13:29.109359  303486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 19:13:29.109462  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.109630  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.109699  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.109878  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.109966  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.110133  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.110213  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.110382  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.110441  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.110606  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.110616  303486 kubeadm.go:310] 
	I0920 19:13:29.110661  303486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 19:13:29.110699  303486 kubeadm.go:310] 		timed out waiting for the condition
	I0920 19:13:29.110706  303486 kubeadm.go:310] 
	I0920 19:13:29.110739  303486 kubeadm.go:310] 	This error is likely caused by:
	I0920 19:13:29.110769  303486 kubeadm.go:310] 		- The kubelet is not running
	I0920 19:13:29.110866  303486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 19:13:29.110875  303486 kubeadm.go:310] 
	I0920 19:13:29.110969  303486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 19:13:29.111003  303486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 19:13:29.111031  303486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 19:13:29.111037  303486 kubeadm.go:310] 
	I0920 19:13:29.111141  303486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 19:13:29.111224  303486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 19:13:29.111231  303486 kubeadm.go:310] 
	I0920 19:13:29.111327  303486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 19:13:29.111407  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 19:13:29.111481  303486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 19:13:29.111542  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 19:13:29.111610  303486 kubeadm.go:394] duration metric: took 7m56.768319159s to StartCluster
	I0920 19:13:29.111640  303486 kubeadm.go:310] 
	I0920 19:13:29.111664  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:13:29.111734  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:13:29.157817  303486 cri.go:89] found id: ""
	I0920 19:13:29.157849  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.157859  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:13:29.157867  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:13:29.157950  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:13:29.192130  303486 cri.go:89] found id: ""
	I0920 19:13:29.192164  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.192179  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:13:29.192187  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:13:29.192243  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:13:29.227594  303486 cri.go:89] found id: ""
	I0920 19:13:29.227631  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.227642  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:13:29.227651  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:13:29.227724  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:13:29.261948  303486 cri.go:89] found id: ""
	I0920 19:13:29.261981  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.261996  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:13:29.262004  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:13:29.262072  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:13:29.295148  303486 cri.go:89] found id: ""
	I0920 19:13:29.295181  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.295191  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:13:29.295200  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:13:29.295270  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:13:29.328094  303486 cri.go:89] found id: ""
	I0920 19:13:29.328127  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.328135  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:13:29.328142  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:13:29.328194  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:13:29.368830  303486 cri.go:89] found id: ""
	I0920 19:13:29.368870  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.368878  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:13:29.368885  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:13:29.368947  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:13:29.420051  303486 cri.go:89] found id: ""
	I0920 19:13:29.420081  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.420091  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:13:29.420106  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:13:29.420123  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:13:29.498322  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:13:29.498350  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:13:29.498364  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:13:29.601796  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:13:29.601842  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:13:29.644325  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:13:29.644368  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:13:29.692691  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:13:29.692736  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0920 19:13:29.707508  303486 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 19:13:29.707577  303486 out.go:270] * 
	W0920 19:13:29.707646  303486 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 19:13:29.707664  303486 out.go:270] * 
	W0920 19:13:29.708560  303486 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 19:13:29.711313  303486 out.go:201] 
	W0920 19:13:29.712520  303486 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 19:13:29.712553  303486 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 19:13:29.712576  303486 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 19:13:29.713832  303486 out.go:201] 
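	The failure above reduces to the kubelet never answering on localhost:10248, so the static control-plane pods are never started. Collecting the log's own troubleshooting suggestions into one runnable sequence (illustrative only; CONTAINERID is a placeholder and the commands may need root on the node):

		# Is the kubelet running, and why did it stop?
		systemctl status kubelet
		journalctl -xeu kubelet

		# The preflight warning notes the service is not enabled:
		systemctl enable kubelet.service

		# List control-plane containers under CRI-O and inspect a failing one:
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

		# Suggestion from the output above: retry with an explicit cgroup driver
		minikube start --extra-config=kubelet.cgroup-driver=systemd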
	
	
	==> CRI-O <==
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.520111849Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859926520090048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e2d3584-71ee-473f-950a-0845257bd1ba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.520692634Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa6217d9-5e33-4ba3-8225-390a18e11f6b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.520762974Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa6217d9-5e33-4ba3-8225-390a18e11f6b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.520978222Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e45816fab0057c66cab7339828ff0d85ee0e168cb3929a33625992f24f4f574a,PodSandboxId:91ed3bd98860188faabda2896009e32422d2b2bddd2ca6e91a66e0f3d802b72b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726859127652897795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07eaf378-3cf0-4ff2-9742-d7fa0a2ef5df,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f,PodSandboxId:49793a7d58f56568bdfce8f0ef2fc27d628ac9dc830eb4751ea37df2d70cb7ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859124169687136,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-427x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b87f9f-4697-4d76-aed1-3d54720172c6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5,PodSandboxId:c438af92b37cb0f52656ecdf78d9d8d9a2384ed8ea3c4876440efc8ed818578e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859117150108869,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85,PodSandboxId:c438af92b37cb0f52656ecdf78d9d8d9a2384ed8ea3c4876440efc8ed818578e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726859116458648062,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4,PodSandboxId:59d696c615a6136109be7b56bc4b65a45c328d0dee39e0252594e74c8eab66f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726859116424719189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zp8l5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fe30e51-ef3f-4448-916a
-8ad75832b207,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862,PodSandboxId:47652a84f58bf0414a4ed6dee54f09aa0fc0b390d0d469df5415a941b6390f4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859112692847849,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ef7803d3c0b8d4bcc4f
cc2c5dc783a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba,PodSandboxId:e3726bea5b79f107a2daee48e8792ee710f3ba45b5908af8cbe2a27e892e2267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859112742677668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e84b82e8bad235e9885f342d9fca6313,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281,PodSandboxId:90cff8f0e4e7881bc8ac4f75ab7c770e5f4aadfd26e6957301b4078fb37856c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859112712192701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2692a2a39fbd70db2aa422a84035
be53,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f,PodSandboxId:9db4832ae0433b55edaff88b1e24188886c84cf4ab2f05e8986d4888f5577a28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859112723372867,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2124512131de5a1d81554836ebcef0
52,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa6217d9-5e33-4ba3-8225-390a18e11f6b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.559266963Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e706baf-5dfc-463a-8dcf-96bd9f6c5e6d name=/runtime.v1.RuntimeService/Version
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.559342540Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e706baf-5dfc-463a-8dcf-96bd9f6c5e6d name=/runtime.v1.RuntimeService/Version
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.560252426Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=93b1e4c9-6e83-4eef-a8bb-d7a2b74d3672 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.560717879Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859926560686163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93b1e4c9-6e83-4eef-a8bb-d7a2b74d3672 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.561287497Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a201cbb-91ca-4787-8d50-b8d9ce5883dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.561350479Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a201cbb-91ca-4787-8d50-b8d9ce5883dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.561537647Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e45816fab0057c66cab7339828ff0d85ee0e168cb3929a33625992f24f4f574a,PodSandboxId:91ed3bd98860188faabda2896009e32422d2b2bddd2ca6e91a66e0f3d802b72b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726859127652897795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07eaf378-3cf0-4ff2-9742-d7fa0a2ef5df,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f,PodSandboxId:49793a7d58f56568bdfce8f0ef2fc27d628ac9dc830eb4751ea37df2d70cb7ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859124169687136,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-427x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b87f9f-4697-4d76-aed1-3d54720172c6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5,PodSandboxId:c438af92b37cb0f52656ecdf78d9d8d9a2384ed8ea3c4876440efc8ed818578e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859117150108869,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85,PodSandboxId:c438af92b37cb0f52656ecdf78d9d8d9a2384ed8ea3c4876440efc8ed818578e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726859116458648062,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4,PodSandboxId:59d696c615a6136109be7b56bc4b65a45c328d0dee39e0252594e74c8eab66f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726859116424719189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zp8l5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fe30e51-ef3f-4448-916a
-8ad75832b207,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862,PodSandboxId:47652a84f58bf0414a4ed6dee54f09aa0fc0b390d0d469df5415a941b6390f4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859112692847849,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ef7803d3c0b8d4bcc4f
cc2c5dc783a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba,PodSandboxId:e3726bea5b79f107a2daee48e8792ee710f3ba45b5908af8cbe2a27e892e2267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859112742677668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e84b82e8bad235e9885f342d9fca6313,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281,PodSandboxId:90cff8f0e4e7881bc8ac4f75ab7c770e5f4aadfd26e6957301b4078fb37856c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859112712192701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2692a2a39fbd70db2aa422a84035
be53,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f,PodSandboxId:9db4832ae0433b55edaff88b1e24188886c84cf4ab2f05e8986d4888f5577a28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859112723372867,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2124512131de5a1d81554836ebcef0
52,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a201cbb-91ca-4787-8d50-b8d9ce5883dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.605296861Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1480c6e0-7da5-4eda-a264-55ddda21e75f name=/runtime.v1.RuntimeService/Version
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.605570911Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1480c6e0-7da5-4eda-a264-55ddda21e75f name=/runtime.v1.RuntimeService/Version
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.607018435Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d5d4110d-6bc9-4a70-81ff-4110426c53b6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.607485871Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859926607461354,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d5d4110d-6bc9-4a70-81ff-4110426c53b6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.608350935Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4dc8c50d-076e-42b5-a8c1-78053112f588 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.608435655Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4dc8c50d-076e-42b5-a8c1-78053112f588 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.608811851Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e45816fab0057c66cab7339828ff0d85ee0e168cb3929a33625992f24f4f574a,PodSandboxId:91ed3bd98860188faabda2896009e32422d2b2bddd2ca6e91a66e0f3d802b72b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726859127652897795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07eaf378-3cf0-4ff2-9742-d7fa0a2ef5df,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f,PodSandboxId:49793a7d58f56568bdfce8f0ef2fc27d628ac9dc830eb4751ea37df2d70cb7ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859124169687136,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-427x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b87f9f-4697-4d76-aed1-3d54720172c6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5,PodSandboxId:c438af92b37cb0f52656ecdf78d9d8d9a2384ed8ea3c4876440efc8ed818578e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859117150108869,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85,PodSandboxId:c438af92b37cb0f52656ecdf78d9d8d9a2384ed8ea3c4876440efc8ed818578e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726859116458648062,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4,PodSandboxId:59d696c615a6136109be7b56bc4b65a45c328d0dee39e0252594e74c8eab66f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726859116424719189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zp8l5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fe30e51-ef3f-4448-916a
-8ad75832b207,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862,PodSandboxId:47652a84f58bf0414a4ed6dee54f09aa0fc0b390d0d469df5415a941b6390f4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859112692847849,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ef7803d3c0b8d4bcc4f
cc2c5dc783a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba,PodSandboxId:e3726bea5b79f107a2daee48e8792ee710f3ba45b5908af8cbe2a27e892e2267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859112742677668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e84b82e8bad235e9885f342d9fca6313,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281,PodSandboxId:90cff8f0e4e7881bc8ac4f75ab7c770e5f4aadfd26e6957301b4078fb37856c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859112712192701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2692a2a39fbd70db2aa422a84035
be53,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f,PodSandboxId:9db4832ae0433b55edaff88b1e24188886c84cf4ab2f05e8986d4888f5577a28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859112723372867,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2124512131de5a1d81554836ebcef0
52,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4dc8c50d-076e-42b5-a8c1-78053112f588 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.643872275Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=41cb63f9-bc9d-4db7-b607-f713d7f0c3f4 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.643944646Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=41cb63f9-bc9d-4db7-b607-f713d7f0c3f4 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.645458592Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=29419d65-46c6-4642-932d-864a4070b8b7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.645906477Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859926645882166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29419d65-46c6-4642-932d-864a4070b8b7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.646558009Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a044440b-6dce-41cd-acf9-c223dae4b1c7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.646670792Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a044440b-6dce-41cd-acf9-c223dae4b1c7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:18:46 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:18:46.646901828Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e45816fab0057c66cab7339828ff0d85ee0e168cb3929a33625992f24f4f574a,PodSandboxId:91ed3bd98860188faabda2896009e32422d2b2bddd2ca6e91a66e0f3d802b72b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726859127652897795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07eaf378-3cf0-4ff2-9742-d7fa0a2ef5df,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f,PodSandboxId:49793a7d58f56568bdfce8f0ef2fc27d628ac9dc830eb4751ea37df2d70cb7ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859124169687136,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-427x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b87f9f-4697-4d76-aed1-3d54720172c6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5,PodSandboxId:c438af92b37cb0f52656ecdf78d9d8d9a2384ed8ea3c4876440efc8ed818578e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859117150108869,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85,PodSandboxId:c438af92b37cb0f52656ecdf78d9d8d9a2384ed8ea3c4876440efc8ed818578e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726859116458648062,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4,PodSandboxId:59d696c615a6136109be7b56bc4b65a45c328d0dee39e0252594e74c8eab66f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726859116424719189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zp8l5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fe30e51-ef3f-4448-916a
-8ad75832b207,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862,PodSandboxId:47652a84f58bf0414a4ed6dee54f09aa0fc0b390d0d469df5415a941b6390f4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859112692847849,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ef7803d3c0b8d4bcc4f
cc2c5dc783a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba,PodSandboxId:e3726bea5b79f107a2daee48e8792ee710f3ba45b5908af8cbe2a27e892e2267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859112742677668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e84b82e8bad235e9885f342d9fca6313,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281,PodSandboxId:90cff8f0e4e7881bc8ac4f75ab7c770e5f4aadfd26e6957301b4078fb37856c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859112712192701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2692a2a39fbd70db2aa422a84035
be53,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f,PodSandboxId:9db4832ae0433b55edaff88b1e24188886c84cf4ab2f05e8986d4888f5577a28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859112723372867,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2124512131de5a1d81554836ebcef0
52,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a044440b-6dce-41cd-acf9-c223dae4b1c7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e45816fab0057       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   91ed3bd988601       busybox
	88f0364540083       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   49793a7d58f56       coredns-7c65d6cfc9-427x2
	a77d8a3964187       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       2                   c438af92b37cb       storage-provisioner
	0d20ef881ab96       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   c438af92b37cb       storage-provisioner
	3591419c15d21       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   59d696c615a61       kube-proxy-zp8l5
	9a3d66bde4ebb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   e3726bea5b79f       kube-controller-manager-default-k8s-diff-port-612312
	f9971978cdd07       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   9db4832ae0433       kube-apiserver-default-k8s-diff-port-612312
	5422e85be2062       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   90cff8f0e4e78       etcd-default-k8s-diff-port-612312
	25594230a0a82       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   47652a84f58bf       kube-scheduler-default-k8s-diff-port-612312
	
	
	==> coredns [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:45063 - 25900 "HINFO IN 2430839124469883219.1019100563124711711. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016951084s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-612312
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-612312
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=default-k8s-diff-port-612312
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_57_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:57:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-612312
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:18:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:15:58 +0000   Fri, 20 Sep 2024 18:57:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:15:58 +0000   Fri, 20 Sep 2024 18:57:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:15:58 +0000   Fri, 20 Sep 2024 18:57:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:15:58 +0000   Fri, 20 Sep 2024 19:05:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.230
	  Hostname:    default-k8s-diff-port-612312
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b60cadd04c8448ca885f0a11b869fa62
	  System UUID:                b60cadd0-4c84-48ca-885f-0a11b869fa62
	  Boot ID:                    477db3ab-d4f7-4411-8b51-5bfccc5662b2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7c65d6cfc9-427x2                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-612312                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-612312             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-612312    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-zp8l5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-612312             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-2tnqc                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-612312 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-612312 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-612312 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-612312 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-612312 event: Registered Node default-k8s-diff-port-612312 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-612312 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-612312 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-612312 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-612312 event: Registered Node default-k8s-diff-port-612312 in Controller
	
	
	==> dmesg <==
	[Sep20 19:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.060784] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037864] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.920706] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.934068] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.600085] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep20 19:05] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.068713] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068845] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.184881] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.116235] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.289531] systemd-fstab-generator[695]: Ignoring "noauto" option for root device
	[  +4.111901] systemd-fstab-generator[787]: Ignoring "noauto" option for root device
	[  +1.866213] systemd-fstab-generator[908]: Ignoring "noauto" option for root device
	[  +0.064762] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.530202] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.400437] systemd-fstab-generator[1588]: Ignoring "noauto" option for root device
	[  +3.328604] kauditd_printk_skb: 71 callbacks suppressed
	[  +5.591392] kauditd_printk_skb: 44 callbacks suppressed
	
	
	==> etcd [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281] <==
	{"level":"warn","ts":"2024-09-20T19:05:31.535776Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"320.573762ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13914594599670812824 > lease_revoke:<id:411a9210cbc3db03>","response":"size:28"}
	{"level":"info","ts":"2024-09-20T19:05:31.535990Z","caller":"traceutil/trace.go:171","msg":"trace[125152882] linearizableReadLoop","detail":"{readStateIndex:607; appliedIndex:606; }","duration":"445.340725ms","start":"2024-09-20T19:05:31.090634Z","end":"2024-09-20T19:05:31.535975Z","steps":["trace[125152882] 'read index received'  (duration: 124.410457ms)","trace[125152882] 'applied index is now lower than readState.Index'  (duration: 320.928805ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T19:05:31.536167Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"444.63127ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-612312\" ","response":"range_response_count:1 size:5537"}
	{"level":"info","ts":"2024-09-20T19:05:31.536236Z","caller":"traceutil/trace.go:171","msg":"trace[320348027] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-612312; range_end:; response_count:1; response_revision:574; }","duration":"444.7099ms","start":"2024-09-20T19:05:31.091516Z","end":"2024-09-20T19:05:31.536226Z","steps":["trace[320348027] 'agreement among raft nodes before linearized reading'  (duration: 444.568863ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:05:31.536289Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T19:05:31.091488Z","time spent":"444.791562ms","remote":"127.0.0.1:47768","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5560,"request content":"key:\"/registry/minions/default-k8s-diff-port-612312\" "}
	{"level":"warn","ts":"2024-09-20T19:05:31.536167Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"445.518719ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-612312\" ","response":"range_response_count:1 size:5919"}
	{"level":"info","ts":"2024-09-20T19:05:31.536512Z","caller":"traceutil/trace.go:171","msg":"trace[920165841] range","detail":"{range_begin:/registry/pods/kube-system/etcd-default-k8s-diff-port-612312; range_end:; response_count:1; response_revision:574; }","duration":"445.869155ms","start":"2024-09-20T19:05:31.090629Z","end":"2024-09-20T19:05:31.536498Z","steps":["trace[920165841] 'agreement among raft nodes before linearized reading'  (duration: 445.419722ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:05:31.536559Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T19:05:31.090554Z","time spent":"445.994061ms","remote":"127.0.0.1:47782","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":5942,"request content":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-612312\" "}
	{"level":"info","ts":"2024-09-20T19:05:31.834381Z","caller":"traceutil/trace.go:171","msg":"trace[588460150] linearizableReadLoop","detail":"{readStateIndex:608; appliedIndex:607; }","duration":"289.001785ms","start":"2024-09-20T19:05:31.545361Z","end":"2024-09-20T19:05:31.834363Z","steps":["trace[588460150] 'read index received'  (duration: 288.816337ms)","trace[588460150] 'applied index is now lower than readState.Index'  (duration: 184.862µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T19:05:31.834719Z","caller":"traceutil/trace.go:171","msg":"trace[163714795] transaction","detail":"{read_only:false; response_revision:575; number_of_response:1; }","duration":"289.691398ms","start":"2024-09-20T19:05:31.545012Z","end":"2024-09-20T19:05:31.834703Z","steps":["trace[163714795] 'process raft request'  (duration: 289.208961ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:05:31.834825Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"239.411952ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T19:05:31.835475Z","caller":"traceutil/trace.go:171","msg":"trace[405837260] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:575; }","duration":"240.074628ms","start":"2024-09-20T19:05:31.595389Z","end":"2024-09-20T19:05:31.835463Z","steps":["trace[405837260] 'agreement among raft nodes before linearized reading'  (duration: 239.390651ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:05:31.834962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.593975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-612312\" ","response":"range_response_count:1 size:5537"}
	{"level":"info","ts":"2024-09-20T19:05:31.836077Z","caller":"traceutil/trace.go:171","msg":"trace[250580635] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-612312; range_end:; response_count:1; response_revision:575; }","duration":"290.708759ms","start":"2024-09-20T19:05:31.545357Z","end":"2024-09-20T19:05:31.836065Z","steps":["trace[250580635] 'agreement among raft nodes before linearized reading'  (duration: 289.493163ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:05:31.835005Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.204984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1128"}
	{"level":"info","ts":"2024-09-20T19:05:31.836303Z","caller":"traceutil/trace.go:171","msg":"trace[499307255] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:575; }","duration":"171.498311ms","start":"2024-09-20T19:05:31.664794Z","end":"2024-09-20T19:05:31.836292Z","steps":["trace[499307255] 'agreement among raft nodes before linearized reading'  (duration: 170.185342ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:05:32.057787Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.935365ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-612312\" ","response":"range_response_count:1 size:5747"}
	{"level":"info","ts":"2024-09-20T19:05:32.057863Z","caller":"traceutil/trace.go:171","msg":"trace[309258126] range","detail":"{range_begin:/registry/pods/kube-system/etcd-default-k8s-diff-port-612312; range_end:; response_count:1; response_revision:575; }","duration":"122.02661ms","start":"2024-09-20T19:05:31.935822Z","end":"2024-09-20T19:05:32.057849Z","steps":["trace[309258126] 'range keys from in-memory index tree'  (duration: 121.81322ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:05:32.318761Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.392799ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13914594599670812843 > lease_revoke:<id:411a9210d2f28f6e>","response":"size:28"}
	{"level":"info","ts":"2024-09-20T19:05:32.318855Z","caller":"traceutil/trace.go:171","msg":"trace[1842132406] linearizableReadLoop","detail":"{readStateIndex:609; appliedIndex:608; }","duration":"178.115006ms","start":"2024-09-20T19:05:32.140726Z","end":"2024-09-20T19:05:32.318841Z","steps":["trace[1842132406] 'read index received'  (duration: 48.581891ms)","trace[1842132406] 'applied index is now lower than readState.Index'  (duration: 129.531985ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T19:05:32.319008Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.267305ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-2tnqc\" ","response":"range_response_count:1 size:4396"}
	{"level":"info","ts":"2024-09-20T19:05:32.319031Z","caller":"traceutil/trace.go:171","msg":"trace[1114442545] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-2tnqc; range_end:; response_count:1; response_revision:575; }","duration":"178.303805ms","start":"2024-09-20T19:05:32.140720Z","end":"2024-09-20T19:05:32.319024Z","steps":["trace[1114442545] 'agreement among raft nodes before linearized reading'  (duration: 178.157287ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:15:14.344072Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":828}
	{"level":"info","ts":"2024-09-20T19:15:14.355039Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":828,"took":"10.567977ms","hash":2083178493,"current-db-size-bytes":2584576,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2584576,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-09-20T19:15:14.355101Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2083178493,"revision":828,"compact-revision":-1}
	
	
	==> kernel <==
	 19:18:46 up 14 min,  0 users,  load average: 0.10, 0.19, 0.18
	Linux default-k8s-diff-port-612312 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f] <==
	W0920 19:15:16.623401       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:15:16.623449       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 19:15:16.624560       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 19:15:16.624690       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 19:16:16.625278       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:16:16.625380       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 19:16:16.625434       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:16:16.625479       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 19:16:16.626642       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 19:16:16.626711       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 19:18:16.627781       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:18:16.627871       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 19:18:16.627928       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:18:16.627991       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 19:18:16.629125       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 19:18:16.629175       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba] <==
	E0920 19:13:19.231052       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:13:19.835991       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:13:49.237901       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:13:49.844259       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:14:19.246262       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:14:19.854908       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:14:49.252844       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:14:49.863179       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:15:19.259205       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:15:19.871159       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:15:49.267274       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:15:49.881454       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 19:15:58.915384       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-612312"
	E0920 19:16:19.275009       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:16:19.890299       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 19:16:24.096270       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="212.447µs"
	I0920 19:16:36.088372       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="55.07µs"
	E0920 19:16:49.280873       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:16:49.897841       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:17:19.287448       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:17:19.905357       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:17:49.293539       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:17:49.914057       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:18:19.300920       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:18:19.922318       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 19:05:16.724297       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 19:05:16.740718       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.230"]
	E0920 19:05:16.740803       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 19:05:16.811167       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 19:05:16.811280       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 19:05:16.811326       1 server_linux.go:169] "Using iptables Proxier"
	I0920 19:05:16.819725       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 19:05:16.820075       1 server.go:483] "Version info" version="v1.31.1"
	I0920 19:05:16.820100       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:05:16.821744       1 config.go:199] "Starting service config controller"
	I0920 19:05:16.821800       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 19:05:16.821847       1 config.go:105] "Starting endpoint slice config controller"
	I0920 19:05:16.821854       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 19:05:16.823470       1 config.go:328] "Starting node config controller"
	I0920 19:05:16.823491       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 19:05:16.922754       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 19:05:16.922819       1 shared_informer.go:320] Caches are synced for service config
	I0920 19:05:16.923746       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862] <==
	I0920 19:05:13.773666       1 serving.go:386] Generated self-signed cert in-memory
	W0920 19:05:15.580225       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 19:05:15.580351       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 19:05:15.580381       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 19:05:15.580444       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 19:05:15.633085       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 19:05:15.635645       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:05:15.639770       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 19:05:15.639928       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 19:05:15.639978       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 19:05:15.640014       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 19:05:15.740811       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 19:17:40 default-k8s-diff-port-612312 kubelet[915]: E0920 19:17:40.073001     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2tnqc" podUID="35ce9a11-e606-41da-84bf-b3c5e9a18245"
	Sep 20 19:17:41 default-k8s-diff-port-612312 kubelet[915]: E0920 19:17:41.231770     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859861231403529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:17:41 default-k8s-diff-port-612312 kubelet[915]: E0920 19:17:41.231807     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859861231403529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:17:51 default-k8s-diff-port-612312 kubelet[915]: E0920 19:17:51.073127     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2tnqc" podUID="35ce9a11-e606-41da-84bf-b3c5e9a18245"
	Sep 20 19:17:51 default-k8s-diff-port-612312 kubelet[915]: E0920 19:17:51.234152     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859871233685567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:17:51 default-k8s-diff-port-612312 kubelet[915]: E0920 19:17:51.234356     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859871233685567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:01 default-k8s-diff-port-612312 kubelet[915]: E0920 19:18:01.236449     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859881236107001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:01 default-k8s-diff-port-612312 kubelet[915]: E0920 19:18:01.236880     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859881236107001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:06 default-k8s-diff-port-612312 kubelet[915]: E0920 19:18:06.073241     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2tnqc" podUID="35ce9a11-e606-41da-84bf-b3c5e9a18245"
	Sep 20 19:18:11 default-k8s-diff-port-612312 kubelet[915]: E0920 19:18:11.091227     915 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 19:18:11 default-k8s-diff-port-612312 kubelet[915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 19:18:11 default-k8s-diff-port-612312 kubelet[915]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 19:18:11 default-k8s-diff-port-612312 kubelet[915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 19:18:11 default-k8s-diff-port-612312 kubelet[915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 19:18:11 default-k8s-diff-port-612312 kubelet[915]: E0920 19:18:11.239646     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859891239223528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:11 default-k8s-diff-port-612312 kubelet[915]: E0920 19:18:11.239709     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859891239223528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:18 default-k8s-diff-port-612312 kubelet[915]: E0920 19:18:18.073458     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2tnqc" podUID="35ce9a11-e606-41da-84bf-b3c5e9a18245"
	Sep 20 19:18:21 default-k8s-diff-port-612312 kubelet[915]: E0920 19:18:21.242030     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859901241521966,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:21 default-k8s-diff-port-612312 kubelet[915]: E0920 19:18:21.242074     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859901241521966,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:29 default-k8s-diff-port-612312 kubelet[915]: E0920 19:18:29.073496     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2tnqc" podUID="35ce9a11-e606-41da-84bf-b3c5e9a18245"
	Sep 20 19:18:31 default-k8s-diff-port-612312 kubelet[915]: E0920 19:18:31.243795     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859911243402512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:31 default-k8s-diff-port-612312 kubelet[915]: E0920 19:18:31.244157     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859911243402512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:41 default-k8s-diff-port-612312 kubelet[915]: E0920 19:18:41.246007     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859921245680874,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:41 default-k8s-diff-port-612312 kubelet[915]: E0920 19:18:41.246032     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859921245680874,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:43 default-k8s-diff-port-612312 kubelet[915]: E0920 19:18:43.074931     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2tnqc" podUID="35ce9a11-e606-41da-84bf-b3c5e9a18245"
	
	
	==> storage-provisioner [0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85] <==
	I0920 19:05:16.613454       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0920 19:05:16.624737       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5] <==
	I0920 19:05:17.278651       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 19:05:17.287139       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 19:05:17.287223       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 19:05:34.871916       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 19:05:34.872182       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-612312_fe527229-f2cc-48af-acbb-f24b59897505!
	I0920 19:05:34.872906       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bf5abdb5-77a8-4a50-ac0c-3da169d5f861", APIVersion:"v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-612312_fe527229-f2cc-48af-acbb-f24b59897505 became leader
	I0920 19:05:34.973220       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-612312_fe527229-f2cc-48af-acbb-f24b59897505!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-612312 -n default-k8s-diff-port-612312
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-612312 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-2tnqc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-612312 describe pod metrics-server-6867b74b74-2tnqc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-612312 describe pod metrics-server-6867b74b74-2tnqc: exit status 1 (65.414309ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-2tnqc" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-612312 describe pod metrics-server-6867b74b74-2tnqc: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.38s)

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.44s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0920 19:10:13.963787  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:50.144635  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:51.354736  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-339897 -n embed-certs-339897
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-20 19:18:59.358999959 +0000 UTC m=+6202.019446606
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-339897 -n embed-certs-339897
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-339897 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-339897 logs -n 25: (2.221033471s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-793540 sudo cat                             | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo                                 | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo                                 | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo                                 | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo find                            | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo crio                            | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-793540                                      | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-896665 | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | disable-driver-mounts-896665                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:57 UTC |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-037711             | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-037711                                   | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-339897            | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-339897                                  | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-612312  | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-037711                  | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-037711                                   | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC | 20 Sep 24 19:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-339897                 | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-425599        | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-339897                                  | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC | 20 Sep 24 19:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-612312       | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC | 20 Sep 24 19:09 UTC |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-425599                              | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-425599             | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-425599                              | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:01:28
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:01:28.948776  303486 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:01:28.948894  303486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:01:28.948900  303486 out.go:358] Setting ErrFile to fd 2...
	I0920 19:01:28.948906  303486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:01:28.949090  303486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 19:01:28.949637  303486 out.go:352] Setting JSON to false
	I0920 19:01:28.950705  303486 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9832,"bootTime":1726849057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 19:01:28.950809  303486 start.go:139] virtualization: kvm guest
	I0920 19:01:28.953226  303486 out.go:177] * [old-k8s-version-425599] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 19:01:28.955013  303486 notify.go:220] Checking for updates...
	I0920 19:01:28.955065  303486 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 19:01:28.956932  303486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:01:28.959076  303486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:01:28.961116  303486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 19:01:28.963396  303486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 19:01:28.965428  303486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:01:28.967688  303486 config.go:182] Loaded profile config "old-k8s-version-425599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 19:01:28.968112  303486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:01:28.968175  303486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:01:28.984002  303486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40325
	I0920 19:01:28.984552  303486 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:01:28.985260  303486 main.go:141] libmachine: Using API Version  1
	I0920 19:01:28.985291  303486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:01:28.985715  303486 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:01:28.985972  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:01:28.988070  303486 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 19:01:28.989565  303486 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:01:28.990007  303486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:01:28.990079  303486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:01:29.006020  303486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34059
	I0920 19:01:29.006492  303486 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:01:29.007046  303486 main.go:141] libmachine: Using API Version  1
	I0920 19:01:29.007078  303486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:01:29.007441  303486 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:01:29.007706  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:01:29.049785  303486 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 19:01:29.051185  303486 start.go:297] selected driver: kvm2
	I0920 19:01:29.051206  303486 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:01:29.051323  303486 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:01:29.052030  303486 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:01:29.052131  303486 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 19:01:29.068826  303486 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 19:01:29.069232  303486 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:01:29.069262  303486 cni.go:84] Creating CNI manager for ""
	I0920 19:01:29.069297  303486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:01:29.069333  303486 start.go:340] cluster config:
	{Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:01:29.069439  303486 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:01:29.071617  303486 out.go:177] * Starting "old-k8s-version-425599" primary control-plane node in "old-k8s-version-425599" cluster
	I0920 19:01:27.086248  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:29.073133  303486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 19:01:29.073174  303486 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 19:01:29.073182  303486 cache.go:56] Caching tarball of preloaded images
	I0920 19:01:29.073269  303486 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 19:01:29.073285  303486 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 19:01:29.073388  303486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/config.json ...
	I0920 19:01:29.073573  303486 start.go:360] acquireMachinesLock for old-k8s-version-425599: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:01:33.166258  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:36.238261  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:42.318195  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:45.390223  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:51.470272  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:54.542277  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:00.622232  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:03.694275  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:09.774241  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:12.846248  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:18.926213  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:21.998195  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:28.078192  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:31.150239  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:37.230160  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:40.302224  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:46.382225  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:49.454205  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:55.534186  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:58.606232  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:04.686254  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:07.758234  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:13.838239  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:16.910321  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:22.990234  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:26.062339  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:32.142210  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:35.214256  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:41.294234  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:44.366288  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:50.446215  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:53.518266  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:59.598190  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:02.670240  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:08.750179  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:11.822239  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:17.902176  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:20.974235  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:23.977804  302869 start.go:364] duration metric: took 4m19.519175605s to acquireMachinesLock for "embed-certs-339897"
	I0920 19:04:23.977868  302869 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:04:23.977876  302869 fix.go:54] fixHost starting: 
	I0920 19:04:23.978233  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:04:23.978277  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:04:23.993804  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39423
	I0920 19:04:23.994326  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:04:23.994906  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:04:23.994925  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:04:23.995219  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:04:23.995413  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:23.995575  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:04:23.997417  302869 fix.go:112] recreateIfNeeded on embed-certs-339897: state=Stopped err=<nil>
	I0920 19:04:23.997439  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	W0920 19:04:23.997636  302869 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:04:24.001021  302869 out.go:177] * Restarting existing kvm2 VM for "embed-certs-339897" ...
	I0920 19:04:24.002636  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Start
	I0920 19:04:24.002842  302869 main.go:141] libmachine: (embed-certs-339897) Ensuring networks are active...
	I0920 19:04:24.003916  302869 main.go:141] libmachine: (embed-certs-339897) Ensuring network default is active
	I0920 19:04:24.004282  302869 main.go:141] libmachine: (embed-certs-339897) Ensuring network mk-embed-certs-339897 is active
	I0920 19:04:24.004647  302869 main.go:141] libmachine: (embed-certs-339897) Getting domain xml...
	I0920 19:04:24.005446  302869 main.go:141] libmachine: (embed-certs-339897) Creating domain...
	I0920 19:04:23.975096  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:04:23.975155  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:04:23.975457  302538 buildroot.go:166] provisioning hostname "no-preload-037711"
	I0920 19:04:23.975485  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:04:23.975712  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:04:23.977607  302538 machine.go:96] duration metric: took 4m37.412034117s to provisionDockerMachine
	I0920 19:04:23.977703  302538 fix.go:56] duration metric: took 4m37.437032108s for fixHost
	I0920 19:04:23.977718  302538 start.go:83] releasing machines lock for "no-preload-037711", held for 4m37.437081737s
	W0920 19:04:23.977745  302538 start.go:714] error starting host: provision: host is not running
	W0920 19:04:23.977850  302538 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0920 19:04:23.977859  302538 start.go:729] Will try again in 5 seconds ...
	I0920 19:04:25.258221  302869 main.go:141] libmachine: (embed-certs-339897) Waiting to get IP...
	I0920 19:04:25.259119  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:25.259493  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:25.259584  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:25.259481  304091 retry.go:31] will retry after 212.462393ms: waiting for machine to come up
	I0920 19:04:25.474057  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:25.474524  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:25.474564  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:25.474441  304091 retry.go:31] will retry after 306.01691ms: waiting for machine to come up
	I0920 19:04:25.782264  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:25.782729  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:25.782753  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:25.782706  304091 retry.go:31] will retry after 416.637796ms: waiting for machine to come up
	I0920 19:04:26.201336  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:26.201704  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:26.201738  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:26.201645  304091 retry.go:31] will retry after 583.373452ms: waiting for machine to come up
	I0920 19:04:26.786448  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:26.786854  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:26.786876  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:26.786807  304091 retry.go:31] will retry after 760.706965ms: waiting for machine to come up
	I0920 19:04:27.548786  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:27.549126  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:27.549149  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:27.549088  304091 retry.go:31] will retry after 615.829194ms: waiting for machine to come up
	I0920 19:04:28.167061  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:28.167601  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:28.167647  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:28.167419  304091 retry.go:31] will retry after 786.700064ms: waiting for machine to come up
	I0920 19:04:28.955294  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:28.955658  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:28.955685  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:28.955611  304091 retry.go:31] will retry after 1.309567829s: waiting for machine to come up
	I0920 19:04:28.979506  302538 start.go:360] acquireMachinesLock for no-preload-037711: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:04:30.267104  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:30.267645  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:30.267676  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:30.267583  304091 retry.go:31] will retry after 1.153396834s: waiting for machine to come up
	I0920 19:04:31.423030  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:31.423604  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:31.423629  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:31.423542  304091 retry.go:31] will retry after 1.858288741s: waiting for machine to come up
	I0920 19:04:33.284886  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:33.285381  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:33.285419  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:33.285334  304091 retry.go:31] will retry after 2.343802005s: waiting for machine to come up
	I0920 19:04:35.630962  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:35.631408  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:35.631439  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:35.631359  304091 retry.go:31] will retry after 2.42254126s: waiting for machine to come up
	I0920 19:04:38.055128  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:38.055796  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:38.055854  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:38.055732  304091 retry.go:31] will retry after 3.877296828s: waiting for machine to come up
	I0920 19:04:43.362725  303063 start.go:364] duration metric: took 4m20.211671699s to acquireMachinesLock for "default-k8s-diff-port-612312"
	I0920 19:04:43.362794  303063 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:04:43.362810  303063 fix.go:54] fixHost starting: 
	I0920 19:04:43.363257  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:04:43.363315  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:04:43.380877  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34751
	I0920 19:04:43.381399  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:04:43.381894  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:04:43.381933  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:04:43.382364  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:04:43.382596  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:04:43.382746  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:04:43.384351  303063 fix.go:112] recreateIfNeeded on default-k8s-diff-port-612312: state=Stopped err=<nil>
	I0920 19:04:43.384379  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	W0920 19:04:43.384540  303063 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:04:43.386969  303063 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-612312" ...
	I0920 19:04:41.936215  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.936789  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has current primary IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.936811  302869 main.go:141] libmachine: (embed-certs-339897) Found IP for machine: 192.168.72.72
	I0920 19:04:41.936823  302869 main.go:141] libmachine: (embed-certs-339897) Reserving static IP address...
	I0920 19:04:41.937386  302869 main.go:141] libmachine: (embed-certs-339897) Reserved static IP address: 192.168.72.72
	I0920 19:04:41.937412  302869 main.go:141] libmachine: (embed-certs-339897) Waiting for SSH to be available...
	I0920 19:04:41.937435  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "embed-certs-339897", mac: "52:54:00:dc:b1:41", ip: "192.168.72.72"} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:41.937466  302869 main.go:141] libmachine: (embed-certs-339897) DBG | skip adding static IP to network mk-embed-certs-339897 - found existing host DHCP lease matching {name: "embed-certs-339897", mac: "52:54:00:dc:b1:41", ip: "192.168.72.72"}
	I0920 19:04:41.937481  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Getting to WaitForSSH function...
	I0920 19:04:41.939673  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.940065  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:41.940089  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.940196  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Using SSH client type: external
	I0920 19:04:41.940223  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa (-rw-------)
	I0920 19:04:41.940261  302869 main.go:141] libmachine: (embed-certs-339897) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:04:41.940274  302869 main.go:141] libmachine: (embed-certs-339897) DBG | About to run SSH command:
	I0920 19:04:41.940285  302869 main.go:141] libmachine: (embed-certs-339897) DBG | exit 0
	I0920 19:04:42.065967  302869 main.go:141] libmachine: (embed-certs-339897) DBG | SSH cmd err, output: <nil>: 
	I0920 19:04:42.066357  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetConfigRaw
	I0920 19:04:42.067004  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:42.069586  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.069937  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.069968  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.070208  302869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/config.json ...
	I0920 19:04:42.070452  302869 machine.go:93] provisionDockerMachine start ...
	I0920 19:04:42.070478  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:42.070687  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.072878  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.073340  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.073375  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.073501  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.073701  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.073899  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.074080  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.074254  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.074504  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.074523  302869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:04:42.182250  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:04:42.182287  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetMachineName
	I0920 19:04:42.182543  302869 buildroot.go:166] provisioning hostname "embed-certs-339897"
	I0920 19:04:42.182570  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetMachineName
	I0920 19:04:42.182818  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.185497  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.185850  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.185886  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.186069  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.186274  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.186421  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.186568  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.186770  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.186986  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.187006  302869 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-339897 && echo "embed-certs-339897" | sudo tee /etc/hostname
	I0920 19:04:42.307656  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-339897
	
	I0920 19:04:42.307700  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.310572  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.310943  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.310970  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.311184  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.311382  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.311534  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.311663  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.311810  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.311984  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.312003  302869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-339897' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-339897/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-339897' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:04:42.426403  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:04:42.426440  302869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:04:42.426493  302869 buildroot.go:174] setting up certificates
	I0920 19:04:42.426502  302869 provision.go:84] configureAuth start
	I0920 19:04:42.426513  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetMachineName
	I0920 19:04:42.426822  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:42.429708  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.430134  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.430170  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.430328  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.432799  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.433222  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.433251  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.433383  302869 provision.go:143] copyHostCerts
	I0920 19:04:42.433466  302869 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:04:42.433487  302869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:04:42.433549  302869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:04:42.433644  302869 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:04:42.433652  302869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:04:42.433678  302869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:04:42.433735  302869 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:04:42.433742  302869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:04:42.433762  302869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:04:42.433811  302869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.embed-certs-339897 san=[127.0.0.1 192.168.72.72 embed-certs-339897 localhost minikube]
	I0920 19:04:42.745528  302869 provision.go:177] copyRemoteCerts
	I0920 19:04:42.745599  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:04:42.745633  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.748247  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.748587  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.748619  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.748811  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.749014  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.749201  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.749334  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:42.831927  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:04:42.855674  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:04:42.879114  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 19:04:42.902982  302869 provision.go:87] duration metric: took 476.462339ms to configureAuth
	I0920 19:04:42.903019  302869 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:04:42.903236  302869 config.go:182] Loaded profile config "embed-certs-339897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:04:42.903321  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.906208  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.906580  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.906613  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.906810  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.907006  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.907136  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.907263  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.907427  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.907601  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.907616  302869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:04:43.127800  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:04:43.127847  302869 machine.go:96] duration metric: took 1.057372659s to provisionDockerMachine
	I0920 19:04:43.127864  302869 start.go:293] postStartSetup for "embed-certs-339897" (driver="kvm2")
	I0920 19:04:43.127890  302869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:04:43.127917  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.128263  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:04:43.128298  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.131648  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.132138  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.132173  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.132340  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.132560  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.132739  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.132896  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:43.216646  302869 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:04:43.220513  302869 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:04:43.220548  302869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:04:43.220629  302869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:04:43.220734  302869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:04:43.220862  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:04:43.230506  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:04:43.252894  302869 start.go:296] duration metric: took 125.003067ms for postStartSetup
	I0920 19:04:43.252943  302869 fix.go:56] duration metric: took 19.275066559s for fixHost
	I0920 19:04:43.252971  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.255999  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.256378  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.256406  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.256634  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.256858  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.257047  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.257214  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.257382  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:43.257546  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:43.257556  302869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:04:43.362516  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859083.339291891
	
	I0920 19:04:43.362545  302869 fix.go:216] guest clock: 1726859083.339291891
	I0920 19:04:43.362553  302869 fix.go:229] Guest: 2024-09-20 19:04:43.339291891 +0000 UTC Remote: 2024-09-20 19:04:43.25294824 +0000 UTC m=+278.942139838 (delta=86.343651ms)
	I0920 19:04:43.362585  302869 fix.go:200] guest clock delta is within tolerance: 86.343651ms
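For readers following the clock-sanity step above: a minimal, hypothetical Go sketch (not minikube's fix.go) of how the guest's `date +%s.%N` output can be turned into a drift value like the 86ms delta logged here; the helper name and the tolerance are assumptions.

    // Hypothetical sketch: parse the guest's `date +%s.%N` output and compute
    // its drift from the local clock. Names and the example tolerance are assumptions.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func guestClockDelta(guestOut string, local time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            // GNU date's %N prints nanoseconds zero-padded to 9 digits.
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return 0, err
            }
        }
        return local.Sub(time.Unix(sec, nsec)), nil
    }

    func main() {
        // Value copied verbatim from the log line above.
        delta, err := guestClockDelta("1726859083.339291891", time.Now())
        if err != nil {
            fmt.Println("parse error:", err)
            return
        }
        fmt.Printf("guest clock delta: %v (would be compared against a tolerance such as 1s)\n", delta)
    }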
	I0920 19:04:43.362591  302869 start.go:83] releasing machines lock for "embed-certs-339897", held for 19.38474105s
	I0920 19:04:43.362620  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.362970  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:43.365988  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.366359  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.366380  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.366610  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.367130  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.367326  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.367423  302869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:04:43.367469  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.367602  302869 ssh_runner.go:195] Run: cat /version.json
	I0920 19:04:43.367628  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.370233  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.370594  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.370624  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.370649  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.370804  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.370998  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.371169  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.371191  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.371249  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.371406  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.371470  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:43.371566  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.371721  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.371885  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:43.490023  302869 ssh_runner.go:195] Run: systemctl --version
	I0920 19:04:43.496615  302869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:04:43.643493  302869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:04:43.649492  302869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:04:43.649560  302869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:04:43.665423  302869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:04:43.665460  302869 start.go:495] detecting cgroup driver to use...
	I0920 19:04:43.665530  302869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:04:43.681288  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:04:43.695161  302869 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:04:43.695218  302869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:04:43.708772  302869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:04:43.722803  302869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:04:43.834054  302869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:04:43.966014  302869 docker.go:233] disabling docker service ...
	I0920 19:04:43.966102  302869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:04:43.982324  302869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:04:43.995351  302869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:04:44.135635  302869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:04:44.262661  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:04:44.277377  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:04:44.299889  302869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:04:44.299965  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.312434  302869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:04:44.312534  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.323052  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.333504  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.343704  302869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:04:44.354386  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.364308  302869 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.383581  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
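The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and the cgroupfs cgroup manager. As a rough local illustration only (not minikube's crio.go), the two key substitutions can be expressed in Go with the standard regexp package; the file path and values are taken from the log, everything else is assumed.

    // Illustrative sketch of the two main edits the sed commands above perform
    // on a local copy of 02-crio.conf. Helper names are assumptions.
    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func rewriteCrioConf(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        // Equivalent to: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        // Equivalent to: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        return os.WriteFile(path, data, 0o644)
    }

    func main() {
        if err := rewriteCrioConf("02-crio.conf"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }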
	I0920 19:04:44.395013  302869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:04:44.405227  302869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:04:44.405279  302869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:04:44.418685  302869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:04:44.431323  302869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:04:44.558582  302869 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:04:44.644003  302869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:04:44.644091  302869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:04:44.649434  302869 start.go:563] Will wait 60s for crictl version
	I0920 19:04:44.649498  302869 ssh_runner.go:195] Run: which crictl
	I0920 19:04:44.653334  302869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:04:44.695896  302869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:04:44.696004  302869 ssh_runner.go:195] Run: crio --version
	I0920 19:04:44.726148  302869 ssh_runner.go:195] Run: crio --version
	I0920 19:04:44.757340  302869 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 19:04:43.388378  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Start
	I0920 19:04:43.388603  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Ensuring networks are active...
	I0920 19:04:43.389387  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Ensuring network default is active
	I0920 19:04:43.389863  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Ensuring network mk-default-k8s-diff-port-612312 is active
	I0920 19:04:43.390364  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Getting domain xml...
	I0920 19:04:43.391121  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Creating domain...
	I0920 19:04:44.718004  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting to get IP...
	I0920 19:04:44.718885  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.719317  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.719413  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:44.719288  304227 retry.go:31] will retry after 197.63251ms: waiting for machine to come up
	I0920 19:04:44.919026  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.919516  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.919547  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:44.919475  304227 retry.go:31] will retry after 305.409091ms: waiting for machine to come up
	I0920 19:04:45.227550  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.228191  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.228224  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:45.228147  304227 retry.go:31] will retry after 311.72219ms: waiting for machine to come up
	I0920 19:04:45.541945  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.542374  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.542403  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:45.542344  304227 retry.go:31] will retry after 547.369471ms: waiting for machine to come up
	I0920 19:04:46.091199  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.091731  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.091765  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:46.091693  304227 retry.go:31] will retry after 519.190971ms: waiting for machine to come up
	I0920 19:04:46.612175  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.612641  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.612672  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:46.612591  304227 retry.go:31] will retry after 715.908704ms: waiting for machine to come up
	I0920 19:04:47.330911  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:47.331350  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:47.331379  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:47.331294  304227 retry.go:31] will retry after 898.358136ms: waiting for machine to come up
	I0920 19:04:44.759090  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:44.762331  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:44.762696  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:44.762728  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:44.762954  302869 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 19:04:44.767209  302869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:04:44.781327  302869 kubeadm.go:883] updating cluster {Name:embed-certs-339897 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-339897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:04:44.781465  302869 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:04:44.781512  302869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:04:44.817356  302869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 19:04:44.817422  302869 ssh_runner.go:195] Run: which lz4
	I0920 19:04:44.821534  302869 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:04:44.826169  302869 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:04:44.826205  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 19:04:46.160290  302869 crio.go:462] duration metric: took 1.338795677s to copy over tarball
	I0920 19:04:46.160379  302869 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 19:04:48.265535  302869 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.105118482s)
	I0920 19:04:48.265580  302869 crio.go:469] duration metric: took 2.105250135s to extract the tarball
	I0920 19:04:48.265588  302869 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 19:04:48.302529  302869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:04:48.346391  302869 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:04:48.346419  302869 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:04:48.346427  302869 kubeadm.go:934] updating node { 192.168.72.72 8443 v1.31.1 crio true true} ...
	I0920 19:04:48.346566  302869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-339897 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-339897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:04:48.346668  302869 ssh_runner.go:195] Run: crio config
	I0920 19:04:48.396798  302869 cni.go:84] Creating CNI manager for ""
	I0920 19:04:48.396824  302869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:04:48.396834  302869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:04:48.396866  302869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.72 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-339897 NodeName:embed-certs-339897 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:04:48.397043  302869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-339897"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:04:48.397121  302869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:04:48.407031  302869 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:04:48.407118  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:04:48.416554  302869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0920 19:04:48.432540  302869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:04:48.448042  302869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0920 19:04:48.465193  302869 ssh_runner.go:195] Run: grep 192.168.72.72	control-plane.minikube.internal$ /etc/hosts
	I0920 19:04:48.469083  302869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:04:48.481123  302869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:04:48.609883  302869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:04:48.627512  302869 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897 for IP: 192.168.72.72
	I0920 19:04:48.627545  302869 certs.go:194] generating shared ca certs ...
	I0920 19:04:48.627571  302869 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:04:48.627784  302869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:04:48.627851  302869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:04:48.627866  302869 certs.go:256] generating profile certs ...
	I0920 19:04:48.628032  302869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/client.key
	I0920 19:04:48.628143  302869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/apiserver.key.308547ed
	I0920 19:04:48.628206  302869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/proxy-client.key
	I0920 19:04:48.628375  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:04:48.628421  302869 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:04:48.628435  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:04:48.628470  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:04:48.628509  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:04:48.628542  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:04:48.628616  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:04:48.629569  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:04:48.656203  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:04:48.708322  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:04:48.737686  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:04:48.772198  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 19:04:48.812086  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 19:04:48.836038  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:04:48.859972  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 19:04:48.883881  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:04:48.908399  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:04:48.930787  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:04:48.954052  302869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:04:48.970257  302869 ssh_runner.go:195] Run: openssl version
	I0920 19:04:48.976072  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:04:48.986449  302869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:04:48.990765  302869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:04:48.990833  302869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:04:48.996437  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:04:49.007111  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:04:49.017548  302869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:04:49.022044  302869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:04:49.022108  302869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:04:49.027752  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:04:49.038538  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:04:49.049445  302869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:04:49.054018  302869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:04:49.054100  302869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:04:49.059842  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
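The `openssl x509 -hash` / `ln -fs` pairs above install each CA certificate under /etc/ssl/certs with the subject-hash file name (for example b5213941.0) that OpenSSL uses for trust-store lookups. A minimal, hypothetical Go sketch of that step, shelling out to openssl the same way the log does; the helper name is an assumption.

    // Hypothetical sketch: ask openssl for the subject-name hash and create
    // the <hash>.0 symlink, mirroring the "test -L || ln -fs" commands above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkCertByHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // replace any stale link, like ln -fs
        return os.Symlink(certPath, link)
    }

    func main() {
        // Paths copied from the log lines above; running this requires root.
        if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }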
	I0920 19:04:49.070748  302869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:04:49.075195  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:04:49.081100  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:04:49.086844  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:04:49.092790  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:04:49.098664  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:04:49.104562  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 19:04:49.110818  302869 kubeadm.go:392] StartCluster: {Name:embed-certs-339897 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-339897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:04:49.110952  302869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:04:49.111003  302869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:04:49.157700  302869 cri.go:89] found id: ""
	I0920 19:04:49.157774  302869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:04:49.168314  302869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:04:49.168339  302869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:04:49.168385  302869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:04:49.178632  302869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:04:49.179681  302869 kubeconfig.go:125] found "embed-certs-339897" server: "https://192.168.72.72:8443"
	I0920 19:04:49.181624  302869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:04:49.192084  302869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.72
	I0920 19:04:49.192159  302869 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:04:49.192188  302869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:04:49.192265  302869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:04:49.229141  302869 cri.go:89] found id: ""
	I0920 19:04:49.229232  302869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:04:49.247628  302869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:04:49.258190  302869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:04:49.258211  302869 kubeadm.go:157] found existing configuration files:
	
	I0920 19:04:49.258270  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:04:49.267769  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:04:49.267837  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:04:49.277473  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:04:49.286639  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:04:49.286712  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:04:49.296295  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:04:49.305705  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:04:49.305787  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:04:49.315191  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:04:49.324206  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:04:49.324288  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:04:49.334065  302869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:04:49.344823  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:48.231405  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:48.231846  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:48.231872  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:48.231795  304227 retry.go:31] will retry after 1.105264539s: waiting for machine to come up
	I0920 19:04:49.338940  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:49.339413  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:49.339437  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:49.339366  304227 retry.go:31] will retry after 1.638536651s: waiting for machine to come up
	I0920 19:04:50.980320  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:50.980774  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:50.980805  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:50.980714  304227 retry.go:31] will retry after 2.064766522s: waiting for machine to come up
	I0920 19:04:49.450454  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.412643  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.629144  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.694547  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.756897  302869 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:04:50.757008  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:51.258120  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:51.758025  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:52.258040  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:52.757302  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:52.774867  302869 api_server.go:72] duration metric: took 2.017964832s to wait for apiserver process to appear ...
	I0920 19:04:52.774906  302869 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:04:52.774954  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:55.383214  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:04:55.383255  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:04:55.383272  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:55.406625  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:04:55.406660  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:04:55.775825  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:55.785126  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:04:55.785157  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:04:56.275864  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:56.284002  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:04:56.284032  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:04:56.775547  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:56.779999  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 200:
	ok
	I0920 19:04:56.786034  302869 api_server.go:141] control plane version: v1.31.1
	I0920 19:04:56.786066  302869 api_server.go:131] duration metric: took 4.011153019s to wait for apiserver health ...
	I0920 19:04:56.786076  302869 cni.go:84] Creating CNI manager for ""
	I0920 19:04:56.786082  302869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:04:56.788195  302869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:04:53.047487  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:53.048005  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:53.048027  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:53.047958  304227 retry.go:31] will retry after 2.829648578s: waiting for machine to come up
	I0920 19:04:55.879069  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:55.879538  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:55.879562  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:55.879488  304227 retry.go:31] will retry after 3.029828813s: waiting for machine to come up
	I0920 19:04:56.789703  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:04:56.799605  302869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:04:56.816974  302869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:04:56.828470  302869 system_pods.go:59] 8 kube-system pods found
	I0920 19:04:56.828582  302869 system_pods.go:61] "coredns-7c65d6cfc9-xnfsk" [5e34a8b9-d748-484a-92ab-0d288ab5f35e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:04:56.828610  302869 system_pods.go:61] "etcd-embed-certs-339897" [1d0e8303-0ab9-418c-ba2d-f0ba33abad36] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 19:04:56.828637  302869 system_pods.go:61] "kube-apiserver-embed-certs-339897" [35569778-54b1-456d-8822-5a53a5e336fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 19:04:56.828655  302869 system_pods.go:61] "kube-controller-manager-embed-certs-339897" [6b9db655-59a1-4975-b3c7-fcc29a912850] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:04:56.828677  302869 system_pods.go:61] "kube-proxy-xs4nd" [a32f4c96-ae6e-4402-89c5-0226a4412d17] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 19:04:56.828694  302869 system_pods.go:61] "kube-scheduler-embed-certs-339897" [81dd07df-2ba9-4f8e-bb16-263bd6496a0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 19:04:56.828716  302869 system_pods.go:61] "metrics-server-6867b74b74-qqhcw" [b720a331-05ef-4528-bd25-0c1e7ef66b16] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:04:56.828729  302869 system_pods.go:61] "storage-provisioner" [08674813-f61d-49e9-a714-5f38b95f058e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 19:04:56.828738  302869 system_pods.go:74] duration metric: took 11.732519ms to wait for pod list to return data ...
	I0920 19:04:56.828748  302869 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:04:56.835747  302869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:04:56.835786  302869 node_conditions.go:123] node cpu capacity is 2
	I0920 19:04:56.835799  302869 node_conditions.go:105] duration metric: took 7.044914ms to run NodePressure ...
	I0920 19:04:56.835822  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:57.221422  302869 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 19:04:57.225575  302869 kubeadm.go:739] kubelet initialised
	I0920 19:04:57.225601  302869 kubeadm.go:740] duration metric: took 4.150722ms waiting for restarted kubelet to initialise ...
	I0920 19:04:57.225610  302869 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:04:57.230469  302869 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace to be "Ready" ...
	I0920 19:04:59.237961  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:58.911412  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:58.911990  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:58.912020  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:58.911956  304227 retry.go:31] will retry after 3.428044067s: waiting for machine to come up
	I0920 19:05:02.343216  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.343633  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Found IP for machine: 192.168.50.230
	I0920 19:05:02.343668  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has current primary IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.343679  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Reserving static IP address...
	I0920 19:05:02.344038  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Reserved static IP address: 192.168.50.230
	I0920 19:05:02.344084  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-612312", mac: "52:54:00:fa:2b:63", ip: "192.168.50.230"} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.344097  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for SSH to be available...
	I0920 19:05:02.344123  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | skip adding static IP to network mk-default-k8s-diff-port-612312 - found existing host DHCP lease matching {name: "default-k8s-diff-port-612312", mac: "52:54:00:fa:2b:63", ip: "192.168.50.230"}
	I0920 19:05:02.344136  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Getting to WaitForSSH function...
	I0920 19:05:02.346591  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.346932  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.346957  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.347110  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Using SSH client type: external
	I0920 19:05:02.347157  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa (-rw-------)
	I0920 19:05:02.347194  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:05:02.347214  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | About to run SSH command:
	I0920 19:05:02.347227  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | exit 0
	I0920 19:05:02.474040  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | SSH cmd err, output: <nil>: 
	I0920 19:05:02.474475  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetConfigRaw
	I0920 19:05:02.475160  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:02.477963  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.478338  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.478361  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.478680  303063 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/config.json ...
	I0920 19:05:02.478923  303063 machine.go:93] provisionDockerMachine start ...
	I0920 19:05:02.478949  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:02.479166  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.481380  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.481759  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.481797  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.481961  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.482149  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.482307  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.482458  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.482619  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:02.482883  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:02.482900  303063 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:05:02.586360  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:05:02.586395  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetMachineName
	I0920 19:05:02.586694  303063 buildroot.go:166] provisioning hostname "default-k8s-diff-port-612312"
	I0920 19:05:02.586720  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetMachineName
	I0920 19:05:02.586951  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.589692  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.590053  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.590080  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.590230  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.590420  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.590563  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.590722  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.590936  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:02.591112  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:02.591126  303063 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-612312 && echo "default-k8s-diff-port-612312" | sudo tee /etc/hostname
	I0920 19:05:02.707768  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-612312
	
	I0920 19:05:02.707799  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.710647  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.711035  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.711064  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.711234  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.711448  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.711622  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.711791  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.711938  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:02.712098  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:02.712116  303063 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-612312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-612312/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-612312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:05:02.828234  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:05:02.828274  303063 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:05:02.828314  303063 buildroot.go:174] setting up certificates
	I0920 19:05:02.828327  303063 provision.go:84] configureAuth start
	I0920 19:05:02.828340  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetMachineName
	I0920 19:05:02.828700  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:02.831997  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.832469  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.832503  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.832704  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.835280  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.835577  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.835608  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.835699  303063 provision.go:143] copyHostCerts
	I0920 19:05:02.835766  303063 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:05:02.835787  303063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:05:02.835848  303063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:05:02.835947  303063 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:05:02.835955  303063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:05:02.835975  303063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:05:02.836026  303063 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:05:02.836033  303063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:05:02.836055  303063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:05:02.836103  303063 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-612312 san=[127.0.0.1 192.168.50.230 default-k8s-diff-port-612312 localhost minikube]
	I0920 19:05:02.983437  303063 provision.go:177] copyRemoteCerts
	I0920 19:05:02.983510  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:05:02.983541  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.986435  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.986791  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.986835  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.987110  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.987289  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.987438  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.987579  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.674961  303486 start.go:364] duration metric: took 3m34.601349843s to acquireMachinesLock for "old-k8s-version-425599"
	I0920 19:05:03.675039  303486 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:05:03.675048  303486 fix.go:54] fixHost starting: 
	I0920 19:05:03.675480  303486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:03.675541  303486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:03.694201  303486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34167
	I0920 19:05:03.694642  303486 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:03.695198  303486 main.go:141] libmachine: Using API Version  1
	I0920 19:05:03.695221  303486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:03.695609  303486 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:03.695793  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:03.695935  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetState
	I0920 19:05:03.697838  303486 fix.go:112] recreateIfNeeded on old-k8s-version-425599: state=Stopped err=<nil>
	I0920 19:05:03.697885  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	W0920 19:05:03.698080  303486 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:05:03.700333  303486 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-425599" ...
	I0920 19:05:03.701947  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .Start
	I0920 19:05:03.702184  303486 main.go:141] libmachine: (old-k8s-version-425599) Ensuring networks are active...
	I0920 19:05:03.703106  303486 main.go:141] libmachine: (old-k8s-version-425599) Ensuring network default is active
	I0920 19:05:03.703645  303486 main.go:141] libmachine: (old-k8s-version-425599) Ensuring network mk-old-k8s-version-425599 is active
	I0920 19:05:03.704152  303486 main.go:141] libmachine: (old-k8s-version-425599) Getting domain xml...
	I0920 19:05:03.704942  303486 main.go:141] libmachine: (old-k8s-version-425599) Creating domain...
	I0920 19:05:01.738488  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:03.238934  302869 pod_ready.go:93] pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:03.238968  302869 pod_ready.go:82] duration metric: took 6.008471722s for pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:03.238978  302869 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:03.746041  302869 pod_ready.go:93] pod "etcd-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:03.746069  302869 pod_ready.go:82] duration metric: took 507.084418ms for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:03.746078  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:03.072306  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0920 19:05:03.096078  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:05:03.122027  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:05:03.150314  303063 provision.go:87] duration metric: took 321.970593ms to configureAuth
	I0920 19:05:03.150345  303063 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:05:03.150557  303063 config.go:182] Loaded profile config "default-k8s-diff-port-612312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:05:03.150650  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.153988  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.154472  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.154524  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.154631  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.154840  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.155194  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.155397  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.155741  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:03.155990  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:03.156011  303063 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:05:03.417981  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:05:03.418020  303063 machine.go:96] duration metric: took 939.078754ms to provisionDockerMachine
	I0920 19:05:03.418038  303063 start.go:293] postStartSetup for "default-k8s-diff-port-612312" (driver="kvm2")
	I0920 19:05:03.418052  303063 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:05:03.418083  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.418456  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:05:03.418496  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.421689  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.422245  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.422282  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.422539  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.422747  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.422991  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.423144  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.509122  303063 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:05:03.515233  303063 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:05:03.515263  303063 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:05:03.515343  303063 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:05:03.515441  303063 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:05:03.515561  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:05:03.529346  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:03.559267  303063 start.go:296] duration metric: took 141.209592ms for postStartSetup
	I0920 19:05:03.559320  303063 fix.go:56] duration metric: took 20.196510123s for fixHost
	I0920 19:05:03.559348  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.563599  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.564320  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.564354  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.564605  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.564917  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.565120  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.565379  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.565588  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:03.565813  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:03.565827  303063 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:05:03.674803  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859103.651785276
	
	I0920 19:05:03.674833  303063 fix.go:216] guest clock: 1726859103.651785276
	I0920 19:05:03.674840  303063 fix.go:229] Guest: 2024-09-20 19:05:03.651785276 +0000 UTC Remote: 2024-09-20 19:05:03.559326363 +0000 UTC m=+280.560675514 (delta=92.458913ms)
	I0920 19:05:03.674862  303063 fix.go:200] guest clock delta is within tolerance: 92.458913ms
	I0920 19:05:03.674867  303063 start.go:83] releasing machines lock for "default-k8s-diff-port-612312", held for 20.312097182s
	I0920 19:05:03.674897  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.675183  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:03.677975  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.678374  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.678406  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.678552  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.679080  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.679255  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.679380  303063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:05:03.679429  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.679442  303063 ssh_runner.go:195] Run: cat /version.json
	I0920 19:05:03.679472  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.682443  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.682733  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.682876  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.682902  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.683014  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.683081  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.683104  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.683222  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.683326  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.683440  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.683512  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.683634  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.683721  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.683753  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.766786  303063 ssh_runner.go:195] Run: systemctl --version
	I0920 19:05:03.806684  303063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:05:03.950032  303063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:05:03.957153  303063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:05:03.957230  303063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:05:03.976784  303063 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:05:03.976814  303063 start.go:495] detecting cgroup driver to use...
	I0920 19:05:03.976902  303063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:05:03.994391  303063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:05:04.009961  303063 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:05:04.010021  303063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:05:04.023827  303063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:05:04.038585  303063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:05:04.157489  303063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:05:04.320396  303063 docker.go:233] disabling docker service ...
	I0920 19:05:04.320477  303063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:05:04.334865  303063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:05:04.350776  303063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:05:04.469438  303063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:05:04.596055  303063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:05:04.610548  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:05:04.629128  303063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:05:04.629192  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.640211  303063 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:05:04.640289  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.650877  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.661863  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.672695  303063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:05:04.684141  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.696358  303063 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.714936  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.726155  303063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:05:04.737400  303063 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:05:04.737460  303063 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:05:04.752752  303063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:05:04.767664  303063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:04.892509  303063 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:05:04.992361  303063 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:05:04.992465  303063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:05:04.997119  303063 start.go:563] Will wait 60s for crictl version
	I0920 19:05:04.997215  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:05:05.001132  303063 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:05:05.050835  303063 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:05:05.050955  303063 ssh_runner.go:195] Run: crio --version
	I0920 19:05:05.079870  303063 ssh_runner.go:195] Run: crio --version
	I0920 19:05:05.112325  303063 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 19:05:05.113600  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:05.116591  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:05.117037  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:05.117075  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:05.117334  303063 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 19:05:05.122086  303063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:05.135489  303063 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-612312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-612312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.230 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:05:05.135682  303063 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:05:05.135776  303063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:05.174026  303063 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 19:05:05.174090  303063 ssh_runner.go:195] Run: which lz4
	I0920 19:05:05.179003  303063 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:05:05.184119  303063 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:05:05.184168  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 19:05:06.479331  303063 crio.go:462] duration metric: took 1.300388015s to copy over tarball
	I0920 19:05:06.479434  303063 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 19:05:05.040094  303486 main.go:141] libmachine: (old-k8s-version-425599) Waiting to get IP...
	I0920 19:05:05.041198  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:05.041615  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:05.041711  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:05.041616  304380 retry.go:31] will retry after 264.073086ms: waiting for machine to come up
	I0920 19:05:05.307229  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:05.307761  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:05.307784  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:05.307713  304380 retry.go:31] will retry after 317.541552ms: waiting for machine to come up
	I0920 19:05:05.627262  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:05.627903  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:05.627929  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:05.627797  304380 retry.go:31] will retry after 432.236037ms: waiting for machine to come up
	I0920 19:05:06.062368  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:06.062842  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:06.062873  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:06.062804  304380 retry.go:31] will retry after 525.683807ms: waiting for machine to come up
	I0920 19:05:06.590915  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:06.591405  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:06.591434  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:06.591355  304380 retry.go:31] will retry after 542.00244ms: waiting for machine to come up
	I0920 19:05:07.135388  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:07.135944  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:07.135998  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:07.135908  304380 retry.go:31] will retry after 886.798885ms: waiting for machine to come up
	I0920 19:05:08.024147  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:08.024684  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:08.024713  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:08.024596  304380 retry.go:31] will retry after 826.869965ms: waiting for machine to come up
	I0920 19:05:08.853176  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:08.853793  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:08.853828  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:08.853736  304380 retry.go:31] will retry after 1.007422775s: waiting for machine to come up
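	Note: the DBG lines above show libmachine polling libvirt for the VM's DHCP lease and backing off a little longer after each miss. Below is a small sketch of that retry-with-growing-delay pattern; the probe function, delays and sample address are made up for illustration and this is not minikube's retry.go.

package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// waitForIP polls probe() until it returns an address or attempts run out,
// sleeping a little longer (with jitter) after each failure.
func waitForIP(probe func() (string, error), attempts int) (string, error) {
    delay := 200 * time.Millisecond
    for i := 0; i < attempts; i++ {
        if ip, err := probe(); err == nil {
            return ip, nil
        }
        jitter := time.Duration(rand.Int63n(int64(delay) / 2))
        fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
        time.Sleep(delay + jitter)
        delay *= 2
    }
    return "", errors.New("machine never reported an IP address")
}

func main() {
    calls := 0
    ip, err := waitForIP(func() (string, error) {
        calls++
        if calls < 4 {
            return "", errors.New("unable to find current IP address")
        }
        return "192.168.61.2", nil // illustrative address, not from this run
    }, 10)
    fmt.Println(ip, err)
}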
	I0920 19:05:05.756992  302869 pod_ready.go:103] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:08.255312  302869 pod_ready.go:103] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:08.656490  303063 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.1770136s)
	I0920 19:05:08.656529  303063 crio.go:469] duration metric: took 2.177156837s to extract the tarball
	I0920 19:05:08.656539  303063 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 19:05:08.693153  303063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:08.733444  303063 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:05:08.733473  303063 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:05:08.733484  303063 kubeadm.go:934] updating node { 192.168.50.230 8444 v1.31.1 crio true true} ...
	I0920 19:05:08.733624  303063 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-612312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-612312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
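	Note: the [Unit]/[Service] block above is the kubelet systemd drop-in minikube renders for this node, with ExecStart rebuilt from the Kubernetes version, node name and node IP shown in the trailing config struct. Below is a text/template sketch that produces a similar drop-in; the struct fields are illustrative, and the real file is the 10-kubeadm.conf written a few lines further down.

package main

import (
    "os"
    "text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
    t := template.Must(template.New("kubelet").Parse(dropIn))
    // Values taken from the log above; any other node substitutes its own.
    data := struct {
        KubernetesVersion, NodeName, NodeIP string
    }{"v1.31.1", "default-k8s-diff-port-612312", "192.168.50.230"}
    if err := t.Execute(os.Stdout, data); err != nil {
        panic(err)
    }
}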
	I0920 19:05:08.733710  303063 ssh_runner.go:195] Run: crio config
	I0920 19:05:08.777872  303063 cni.go:84] Creating CNI manager for ""
	I0920 19:05:08.777913  303063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:05:08.777927  303063 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:05:08.777957  303063 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.230 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-612312 NodeName:default-k8s-diff-port-612312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:05:08.778143  303063 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.230
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-612312"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:05:08.778220  303063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:05:08.788133  303063 binaries.go:44] Found k8s binaries, skipping transfer
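	Note: the kubeadm.yaml rendered above is a single file holding four YAML documents separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. Below is a stdlib-only sketch that splits such a file and reports each document's kind, handy when eyeballing what minikube actually wrote; the path is illustrative.

package main

import (
    "fmt"
    "os"
    "strings"
)

func main() {
    raw, err := os.ReadFile("kubeadm.yaml") // illustrative path
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    for i, doc := range strings.Split(string(raw), "\n---\n") {
        kind := "unknown"
        for _, line := range strings.Split(doc, "\n") {
            trimmed := strings.TrimSpace(line)
            if strings.HasPrefix(trimmed, "kind:") {
                kind = strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
                break
            }
        }
        fmt.Printf("document %d: kind=%s (%d bytes)\n", i+1, kind, len(doc))
    }
}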
	I0920 19:05:08.788208  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:05:08.797461  303063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0920 19:05:08.814111  303063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:05:08.832188  303063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0920 19:05:08.849801  303063 ssh_runner.go:195] Run: grep 192.168.50.230	control-plane.minikube.internal$ /etc/hosts
	I0920 19:05:08.853809  303063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:08.865685  303063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:08.985881  303063 ssh_runner.go:195] Run: sudo systemctl start kubelet
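	Note: the bash one-liner above rewrites /etc/hosts so that exactly one line maps control-plane.minikube.internal, dropping any stale entry before appending the current control-plane IP. The same transformation over an in-memory copy of the file is sketched below; the real command edits /etc/hosts in place under sudo.

package main

import (
    "fmt"
    "strings"
)

// ensureHostsEntry drops any line that already maps hostname and appends a
// fresh "ip<TAB>hostname" entry, mirroring the grep -v / echo pipeline above.
func ensureHostsEntry(hosts, ip, hostname string) string {
    var kept []string
    for _, line := range strings.Split(hosts, "\n") {
        fields := strings.Fields(line)
        if len(fields) >= 2 && fields[1] == hostname {
            continue // stale entry for this hostname
        }
        if line != "" {
            kept = append(kept, line)
        }
    }
    kept = append(kept, ip+"\t"+hostname)
    return strings.Join(kept, "\n") + "\n"
}

func main() {
    before := "127.0.0.1\tlocalhost\n192.168.50.9\tcontrol-plane.minikube.internal\n"
    fmt.Print(ensureHostsEntry(before, "192.168.50.230", "control-plane.minikube.internal"))
}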
	I0920 19:05:09.002387  303063 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312 for IP: 192.168.50.230
	I0920 19:05:09.002417  303063 certs.go:194] generating shared ca certs ...
	I0920 19:05:09.002441  303063 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:09.002656  303063 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:05:09.002727  303063 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:05:09.002741  303063 certs.go:256] generating profile certs ...
	I0920 19:05:09.002859  303063 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/client.key
	I0920 19:05:09.002940  303063 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/apiserver.key.637d18af
	I0920 19:05:09.002990  303063 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/proxy-client.key
	I0920 19:05:09.003207  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:05:09.003248  303063 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:05:09.003256  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:05:09.003278  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:05:09.003306  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:05:09.003328  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:05:09.003365  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:09.004030  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:05:09.037203  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:05:09.068858  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:05:09.095082  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:05:09.122167  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 19:05:09.147953  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 19:05:09.174251  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:05:09.202438  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:05:09.231354  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:05:09.256365  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:05:09.282589  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:05:09.308610  303063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:05:09.328798  303063 ssh_runner.go:195] Run: openssl version
	I0920 19:05:09.334685  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:05:09.345947  303063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:09.350772  303063 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:09.350838  303063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:09.356595  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:05:09.367559  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:05:09.380638  303063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:05:09.385362  303063 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:05:09.385429  303063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:05:09.391299  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:05:09.402065  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:05:09.412841  303063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:05:09.417074  303063 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:05:09.417138  303063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:05:09.422761  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:05:09.433780  303063 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:05:09.438734  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:05:09.444888  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:05:09.450715  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:05:09.456993  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:05:09.462716  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:05:09.468847  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
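	Note: after linking each CA into /etc/ssl/certs under its OpenSSL subject-hash name (the b5213941.0-style symlinks above), minikube runs openssl x509 -checkend 86400 against every control-plane certificate, i.e. "will this certificate still be valid 24 hours from now?". An equivalent check with crypto/x509 is sketched below; the certificate path is illustrative.

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

// validFor reports whether the PEM-encoded certificate at path is still
// valid at now+window, like `openssl x509 -checkend <seconds>`.
func validFor(path string, window time.Duration) (bool, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return false, err
    }
    block, _ := pem.Decode(data)
    if block == nil {
        return false, fmt.Errorf("%s: no PEM block found", path)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return false, err
    }
    return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
    ok, err := validFor("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("valid for the next 24h:", ok)
}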
	I0920 19:05:09.474680  303063 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-612312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-612312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.230 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:05:09.474780  303063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:05:09.474844  303063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:09.513886  303063 cri.go:89] found id: ""
	I0920 19:05:09.514006  303063 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:05:09.524385  303063 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:05:09.524417  303063 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:05:09.524479  303063 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:05:09.534288  303063 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:05:09.535251  303063 kubeconfig.go:125] found "default-k8s-diff-port-612312" server: "https://192.168.50.230:8444"
	I0920 19:05:09.537293  303063 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:05:09.547753  303063 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.230
	I0920 19:05:09.547796  303063 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:05:09.547812  303063 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:05:09.547890  303063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:09.590656  303063 cri.go:89] found id: ""
	I0920 19:05:09.590743  303063 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:05:09.607426  303063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:05:09.617258  303063 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:05:09.617280  303063 kubeadm.go:157] found existing configuration files:
	
	I0920 19:05:09.617344  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 19:05:09.626725  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:05:09.626813  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:05:09.636421  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 19:05:09.645711  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:05:09.645780  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:05:09.655351  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 19:05:09.664771  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:05:09.664833  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:05:09.674556  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 19:05:09.683677  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:05:09.683821  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
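	Note: the four grep/rm pairs above enforce that every kubeconfig under /etc/kubernetes references https://control-plane.minikube.internal:8444; files that do not (or, as in this run, do not exist) are removed so the kubeadm init phase kubeconfig step below can regenerate them. A stdlib sketch of that check follows; the paths are hard-coded for illustration and the real commands run through sudo on the guest.

package main

import (
    "fmt"
    "os"
    "strings"
)

func main() {
    endpoint := "https://control-plane.minikube.internal:8444"
    files := []string{
        "/etc/kubernetes/admin.conf",
        "/etc/kubernetes/kubelet.conf",
        "/etc/kubernetes/controller-manager.conf",
        "/etc/kubernetes/scheduler.conf",
    }
    for _, f := range files {
        data, err := os.ReadFile(f)
        if err == nil && strings.Contains(string(data), endpoint) {
            continue // kubeconfig already points at the expected endpoint
        }
        fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
        os.Remove(f) // ignore the error: the file may simply not exist yet
    }
}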
	I0920 19:05:09.695159  303063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:05:09.704995  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:09.821398  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:10.642045  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:10.870266  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:10.935191  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:11.015669  303063 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:05:11.015787  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:11.516670  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:12.016486  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:12.516070  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:13.016012  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:13.031718  303063 api_server.go:72] duration metric: took 2.016048489s to wait for apiserver process to appear ...
	I0920 19:05:13.031752  303063 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:05:13.031781  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:13.032414  303063 api_server.go:269] stopped: https://192.168.50.230:8444/healthz: Get "https://192.168.50.230:8444/healthz": dial tcp 192.168.50.230:8444: connect: connection refused
	I0920 19:05:09.863227  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:09.863693  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:09.863721  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:09.863640  304380 retry.go:31] will retry after 1.556199895s: waiting for machine to come up
	I0920 19:05:11.422510  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:11.423244  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:11.423271  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:11.423179  304380 retry.go:31] will retry after 1.670177778s: waiting for machine to come up
	I0920 19:05:13.095982  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:13.096600  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:13.096626  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:13.096545  304380 retry.go:31] will retry after 2.71780554s: waiting for machine to come up
	I0920 19:05:10.256325  302869 pod_ready.go:93] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.256352  302869 pod_ready.go:82] duration metric: took 6.510267221s for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.256361  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.263229  302869 pod_ready.go:93] pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.263254  302869 pod_ready.go:82] duration metric: took 6.886052ms for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.263264  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xs4nd" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.270014  302869 pod_ready.go:93] pod "kube-proxy-xs4nd" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.270040  302869 pod_ready.go:82] duration metric: took 6.769102ms for pod "kube-proxy-xs4nd" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.270049  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.277232  302869 pod_ready.go:93] pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.277262  302869 pod_ready.go:82] duration metric: took 7.203732ms for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.277275  302869 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:12.284083  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:14.284983  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:13.532830  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:15.579530  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:05:15.579567  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:05:15.579584  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:15.596526  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:05:15.596570  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:05:16.032011  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:16.039310  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:05:16.039346  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:05:16.531881  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:16.536703  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:05:16.536736  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:05:17.032322  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:17.036979  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 200:
	ok
	I0920 19:05:17.043667  303063 api_server.go:141] control plane version: v1.31.1
	I0920 19:05:17.043701  303063 api_server.go:131] duration metric: took 4.011936277s to wait for apiserver health ...
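	Note: the healthz progression above is the normal restart sequence: connection refused while the apiserver is not yet listening, 403 while anonymous access to /healthz is still forbidden (the RBAC bootstrap roles that permit it are created by a post-start hook), 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes hooks finish, and finally 200 ok. A minimal poller in the same spirit is sketched below; TLS verification is skipped because this sketch does not load minikubeCA, and the URL is the one from this run.

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{
        Timeout: 5 * time.Second,
        // The apiserver certificate is signed by minikubeCA, which this
        // sketch does not load, so verification is skipped like a raw probe.
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    url := "https://192.168.50.230:8444/healthz"
    deadline := time.Now().Add(2 * time.Minute)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err != nil {
            fmt.Println("stopped:", err) // e.g. connection refused while the apiserver restarts
        } else {
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
            if resp.StatusCode == http.StatusOK {
                return
            }
        }
        time.Sleep(500 * time.Millisecond)
    }
    fmt.Println("apiserver never became healthy")
}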
	I0920 19:05:17.043710  303063 cni.go:84] Creating CNI manager for ""
	I0920 19:05:17.043716  303063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:05:17.045376  303063 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:05:17.046579  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:05:17.056771  303063 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:05:17.076571  303063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:05:17.085546  303063 system_pods.go:59] 8 kube-system pods found
	I0920 19:05:17.085584  303063 system_pods.go:61] "coredns-7c65d6cfc9-427x2" [48b87f9f-4697-4d76-aed1-3d54720172c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:05:17.085591  303063 system_pods.go:61] "etcd-default-k8s-diff-port-612312" [8537d9ce-87da-425d-a90a-eb4f30f9d23f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 19:05:17.085597  303063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-612312" [d5705298-0b1f-4f95-bf5d-c795e09dd30e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 19:05:17.085608  303063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-612312" [99b04f73-91d6-42d4-8def-09b65ca142f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:05:17.085615  303063 system_pods.go:61] "kube-proxy-zp8l5" [9fe30e51-ef3f-4448-916a-8ad75832b207] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 19:05:17.085624  303063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-612312" [4b9dfd39-5b60-4a90-9edf-0af7e99f0ac6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 19:05:17.085631  303063 system_pods.go:61] "metrics-server-6867b74b74-2tnqc" [35ce9a11-e606-41da-84bf-b3c5e9a18245] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:05:17.085638  303063 system_pods.go:61] "storage-provisioner" [6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 19:05:17.085646  303063 system_pods.go:74] duration metric: took 9.051189ms to wait for pod list to return data ...
	I0920 19:05:17.085657  303063 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:05:17.089161  303063 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:05:17.089190  303063 node_conditions.go:123] node cpu capacity is 2
	I0920 19:05:17.089201  303063 node_conditions.go:105] duration metric: took 3.534622ms to run NodePressure ...
	I0920 19:05:17.089218  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:17.442957  303063 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 19:05:17.447222  303063 kubeadm.go:739] kubelet initialised
	I0920 19:05:17.447247  303063 kubeadm.go:740] duration metric: took 4.255349ms waiting for restarted kubelet to initialise ...
	I0920 19:05:17.447255  303063 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:05:17.451839  303063 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.457216  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.457240  303063 pod_ready.go:82] duration metric: took 5.361636ms for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.457250  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.457256  303063 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.462245  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.462273  303063 pod_ready.go:82] duration metric: took 5.009342ms for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.462313  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.462326  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.468060  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.468087  303063 pod_ready.go:82] duration metric: took 5.75409ms for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.468099  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.468105  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.479703  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.479727  303063 pod_ready.go:82] duration metric: took 11.614638ms for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.479739  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.479750  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.879555  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-proxy-zp8l5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.879582  303063 pod_ready.go:82] duration metric: took 399.824208ms for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.879592  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-proxy-zp8l5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.879599  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:18.281551  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.281585  303063 pod_ready.go:82] duration metric: took 401.976884ms for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:18.281601  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.281611  303063 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:18.680674  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.680711  303063 pod_ready.go:82] duration metric: took 399.091849ms for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:18.680723  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.680730  303063 pod_ready.go:39] duration metric: took 1.233465539s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
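	Note: the pod_ready loop above checks each system-critical pod's Ready condition but deliberately skips, and records an error for, any pod whose node is itself not Ready, which is why every pod is skipped right after this restart. Below is a compressed client-go sketch of just the per-pod readiness test; the kubeconfig source and pod name are illustrative, and client-go is an external dependency whose import paths are assumed from its published API.

package main

import (
    "context"
    "fmt"
    "os"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true when the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
    for _, c := range pod.Status.Conditions {
        if c.Type == corev1.PodReady {
            return c.Status == corev1.ConditionTrue
        }
    }
    return false
}

func main() {
    // KUBECONFIG must point at a valid kubeconfig for this to run.
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
        "coredns-7c65d6cfc9-427x2", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Printf("pod %s ready: %v\n", pod.Name, isPodReady(pod))
}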
	I0920 19:05:18.680747  303063 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:05:18.692948  303063 ops.go:34] apiserver oom_adj: -16
	I0920 19:05:18.692970  303063 kubeadm.go:597] duration metric: took 9.168545987s to restartPrimaryControlPlane
	I0920 19:05:18.692981  303063 kubeadm.go:394] duration metric: took 9.218309896s to StartCluster
	I0920 19:05:18.692999  303063 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:18.693078  303063 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:05:18.694921  303063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:18.695293  303063 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.230 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:05:18.696157  303063 config.go:182] Loaded profile config "default-k8s-diff-port-612312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:05:18.696187  303063 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:05:18.696357  303063 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-612312"
	I0920 19:05:18.696377  303063 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-612312"
	W0920 19:05:18.696387  303063 addons.go:243] addon storage-provisioner should already be in state true
	I0920 19:05:18.696419  303063 host.go:66] Checking if "default-k8s-diff-port-612312" exists ...
	I0920 19:05:18.696449  303063 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-612312"
	I0920 19:05:18.696495  303063 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-612312"
	I0920 19:05:18.696506  303063 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-612312"
	I0920 19:05:18.696588  303063 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-612312"
	W0920 19:05:18.696610  303063 addons.go:243] addon metrics-server should already be in state true
	I0920 19:05:18.696709  303063 host.go:66] Checking if "default-k8s-diff-port-612312" exists ...
	I0920 19:05:18.697239  303063 out.go:177] * Verifying Kubernetes components...
	I0920 19:05:18.697334  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.697386  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.697409  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.697409  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.697442  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.697531  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.698927  303063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:18.713346  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44359
	I0920 19:05:18.713346  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35127
	I0920 19:05:18.713967  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.714000  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.714472  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.714491  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.714572  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.714588  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.714961  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.714965  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.715163  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.715842  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.715893  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.717732  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I0920 19:05:18.718289  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.718553  303063 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-612312"
	W0920 19:05:18.718575  303063 addons.go:243] addon default-storageclass should already be in state true
	I0920 19:05:18.718604  303063 host.go:66] Checking if "default-k8s-diff-port-612312" exists ...
	I0920 19:05:18.718827  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.718852  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.718926  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.718956  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.719243  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.719782  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.719826  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.733219  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42087
	I0920 19:05:18.733789  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.734403  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.734422  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.734463  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41091
	I0920 19:05:18.734905  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.734993  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.735207  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.735363  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.735394  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.735703  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.736264  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.736321  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.737489  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:18.739977  303063 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 19:05:18.740477  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39547
	I0920 19:05:18.741217  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.741752  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:05:18.741770  303063 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:05:18.741791  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:18.741854  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.741875  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.742351  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.742547  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.744800  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:18.746006  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.746416  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:18.746442  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.746695  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:18.746961  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:18.746974  303063 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:15.815519  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:15.816035  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:15.816065  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:15.815974  304380 retry.go:31] will retry after 2.62788631s: waiting for machine to come up
	I0920 19:05:18.446768  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:18.447219  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:18.447240  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:18.447166  304380 retry.go:31] will retry after 4.025841071s: waiting for machine to come up
	I0920 19:05:16.784503  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:18.785829  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:18.747159  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:18.747332  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:18.748881  303063 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:05:18.748901  303063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:05:18.748932  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:18.752335  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.752787  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:18.752812  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.753180  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:18.753340  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:18.753491  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:18.753628  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:18.755106  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I0920 19:05:18.755543  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.756159  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.756182  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.756521  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.756710  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.758400  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:18.758674  303063 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:05:18.758690  303063 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:05:18.758707  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:18.762208  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.762748  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:18.762776  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.762950  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:18.763235  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:18.763518  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:18.763678  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:18.900876  303063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:05:18.919923  303063 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-612312" to be "Ready" ...
	I0920 19:05:18.993779  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:05:18.993814  303063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 19:05:19.001703  303063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:05:19.019424  303063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:05:19.054174  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:05:19.054202  303063 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:05:19.123651  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:05:19.123682  303063 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:05:19.186745  303063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:05:19.369866  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:19.369898  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:19.370210  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:19.370229  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:19.370246  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:19.370270  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:19.370552  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:19.370593  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:19.370625  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Closing plugin on server side
	I0920 19:05:19.380105  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:19.380140  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:19.380456  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:19.380472  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.145346  303063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.12587258s)
	I0920 19:05:20.145412  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.145427  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.145769  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Closing plugin on server side
	I0920 19:05:20.145834  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.145846  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.145866  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.145877  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.146126  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.146144  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.152067  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.152084  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.152361  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.152379  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.152388  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.152395  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.152625  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.152662  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Closing plugin on server side
	I0920 19:05:20.152711  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.152729  303063 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-612312"
	I0920 19:05:20.154940  303063 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0920 19:05:20.156326  303063 addons.go:510] duration metric: took 1.460148296s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0920 19:05:20.923687  303063 node_ready.go:53] node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:22.924271  303063 node_ready.go:53] node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:23.791151  302538 start.go:364] duration metric: took 54.811585482s to acquireMachinesLock for "no-preload-037711"
	I0920 19:05:23.791208  302538 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:05:23.791219  302538 fix.go:54] fixHost starting: 
	I0920 19:05:23.791657  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:23.791696  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:23.809350  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32987
	I0920 19:05:23.809873  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:23.810520  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:05:23.810555  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:23.810893  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:23.811118  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:23.811286  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:05:23.812885  302538 fix.go:112] recreateIfNeeded on no-preload-037711: state=Stopped err=<nil>
	I0920 19:05:23.812914  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	W0920 19:05:23.813135  302538 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:05:23.815287  302538 out.go:177] * Restarting existing kvm2 VM for "no-preload-037711" ...
	I0920 19:05:22.477850  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.478419  303486 main.go:141] libmachine: (old-k8s-version-425599) Found IP for machine: 192.168.39.53
	I0920 19:05:22.478454  303486 main.go:141] libmachine: (old-k8s-version-425599) Reserving static IP address...
	I0920 19:05:22.478473  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has current primary IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.478983  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "old-k8s-version-425599", mac: "52:54:00:d2:a5:70", ip: "192.168.39.53"} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.479021  303486 main.go:141] libmachine: (old-k8s-version-425599) Reserved static IP address: 192.168.39.53
	I0920 19:05:22.479040  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | skip adding static IP to network mk-old-k8s-version-425599 - found existing host DHCP lease matching {name: "old-k8s-version-425599", mac: "52:54:00:d2:a5:70", ip: "192.168.39.53"}
	I0920 19:05:22.479055  303486 main.go:141] libmachine: (old-k8s-version-425599) Waiting for SSH to be available...
	I0920 19:05:22.479067  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | Getting to WaitForSSH function...
	I0920 19:05:22.481118  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.481359  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.481382  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.481556  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | Using SSH client type: external
	I0920 19:05:22.481570  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa (-rw-------)
	I0920 19:05:22.481600  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:05:22.481612  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | About to run SSH command:
	I0920 19:05:22.481627  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | exit 0
	I0920 19:05:22.606383  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | SSH cmd err, output: <nil>: 
	I0920 19:05:22.606783  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetConfigRaw
	I0920 19:05:22.607408  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:22.610155  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.610474  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.610506  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.610784  303486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/config.json ...
	I0920 19:05:22.611075  303486 machine.go:93] provisionDockerMachine start ...
	I0920 19:05:22.611103  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:22.611332  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.613838  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.614250  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.614283  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.614395  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:22.614609  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.614776  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.614950  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:22.615136  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:22.615331  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:22.615344  303486 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:05:22.718330  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:05:22.718363  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 19:05:22.718651  303486 buildroot.go:166] provisioning hostname "old-k8s-version-425599"
	I0920 19:05:22.718697  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 19:05:22.718913  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.722027  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.722334  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.722370  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.722559  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:22.722738  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.722909  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.723086  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:22.723261  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:22.723473  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:22.723491  303486 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-425599 && echo "old-k8s-version-425599" | sudo tee /etc/hostname
	I0920 19:05:22.841563  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-425599
	
	I0920 19:05:22.841592  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.844327  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.844716  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.844748  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.844970  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:22.845154  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.845306  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.845413  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:22.845570  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:22.845793  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:22.845818  303486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-425599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-425599/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-425599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:05:22.959542  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:05:22.959572  303486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:05:22.959615  303486 buildroot.go:174] setting up certificates
	I0920 19:05:22.959625  303486 provision.go:84] configureAuth start
	I0920 19:05:22.959635  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 19:05:22.959972  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:22.962506  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.962845  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.962883  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.963020  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.965352  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.965734  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.965755  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.965936  303486 provision.go:143] copyHostCerts
	I0920 19:05:22.965999  303486 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:05:22.966018  303486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:05:22.966073  303486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:05:22.966165  303486 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:05:22.966173  303486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:05:22.966193  303486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:05:22.966250  303486 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:05:22.966257  303486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:05:22.966274  303486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:05:22.966368  303486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-425599 san=[127.0.0.1 192.168.39.53 localhost minikube old-k8s-version-425599]
	I0920 19:05:23.156245  303486 provision.go:177] copyRemoteCerts
	I0920 19:05:23.156322  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:05:23.156356  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.159694  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.160062  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.160105  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.160283  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.160467  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.160633  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.160755  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.244439  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:05:23.271796  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 19:05:23.298124  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:05:23.323466  303486 provision.go:87] duration metric: took 363.82725ms to configureAuth
	I0920 19:05:23.323496  303486 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:05:23.323711  303486 config.go:182] Loaded profile config "old-k8s-version-425599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 19:05:23.323805  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.326985  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.327336  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.327363  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.327573  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.327788  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.328003  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.328161  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.328315  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:23.328492  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:23.328506  303486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:05:23.559721  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:05:23.559755  303486 machine.go:96] duration metric: took 948.663189ms to provisionDockerMachine
	I0920 19:05:23.559770  303486 start.go:293] postStartSetup for "old-k8s-version-425599" (driver="kvm2")
	I0920 19:05:23.559781  303486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:05:23.559812  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.560186  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:05:23.560225  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.563146  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.563462  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.563491  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.563786  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.564018  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.564214  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.564365  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.645013  303486 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:05:23.649198  303486 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:05:23.649230  303486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:05:23.649300  303486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:05:23.649416  303486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:05:23.649544  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:05:23.659351  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:23.683405  303486 start.go:296] duration metric: took 123.617289ms for postStartSetup
	I0920 19:05:23.683466  303486 fix.go:56] duration metric: took 20.008417985s for fixHost
	I0920 19:05:23.683495  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.686540  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.686962  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.686988  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.687209  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.687445  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.687624  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.687803  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.688001  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:23.688188  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:23.688206  303486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:05:23.790992  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859123.767729644
	
	I0920 19:05:23.791024  303486 fix.go:216] guest clock: 1726859123.767729644
	I0920 19:05:23.791035  303486 fix.go:229] Guest: 2024-09-20 19:05:23.767729644 +0000 UTC Remote: 2024-09-20 19:05:23.683472425 +0000 UTC m=+234.770765310 (delta=84.257219ms)
	I0920 19:05:23.791061  303486 fix.go:200] guest clock delta is within tolerance: 84.257219ms
	I0920 19:05:23.791068  303486 start.go:83] releasing machines lock for "old-k8s-version-425599", held for 20.116056408s
	I0920 19:05:23.791101  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.791432  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:23.794540  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.795015  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.795048  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.795226  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.795779  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.795992  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.796129  303486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:05:23.796180  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.796241  303486 ssh_runner.go:195] Run: cat /version.json
	I0920 19:05:23.796265  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.799032  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.799374  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.799399  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.799418  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.799540  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.799743  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.799874  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.799890  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.799906  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.800084  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.800077  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.800198  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.800365  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.800514  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.924885  303486 ssh_runner.go:195] Run: systemctl --version
	I0920 19:05:23.932642  303486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:05:21.284671  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:23.284813  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:24.083860  303486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:05:24.090360  303486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:05:24.090444  303486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:05:24.112281  303486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:05:24.112310  303486 start.go:495] detecting cgroup driver to use...
	I0920 19:05:24.112383  303486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:05:24.136600  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:05:24.154552  303486 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:05:24.154631  303486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:05:24.170600  303486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:05:24.186071  303486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:05:24.319752  303486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:05:24.498299  303486 docker.go:233] disabling docker service ...
	I0920 19:05:24.498385  303486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:05:24.515762  303486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:05:24.533482  303486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:05:24.687481  303486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:05:24.820191  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:05:24.835255  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:05:24.856179  303486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 19:05:24.856253  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.868991  303486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:05:24.869080  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.884074  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.898732  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.911016  303486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:05:24.922757  303486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:05:24.937719  303486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:05:24.937828  303486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:05:24.955496  303486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:05:24.966347  303486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:25.114758  303486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:05:25.226807  303486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:05:25.226984  303486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:05:25.234576  303486 start.go:563] Will wait 60s for crictl version
	I0920 19:05:25.234664  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:25.238739  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:05:25.282242  303486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:05:25.282344  303486 ssh_runner.go:195] Run: crio --version
	I0920 19:05:25.317733  303486 ssh_runner.go:195] Run: crio --version
	I0920 19:05:25.353767  303486 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 19:05:23.816707  302538 main.go:141] libmachine: (no-preload-037711) Calling .Start
	I0920 19:05:23.817003  302538 main.go:141] libmachine: (no-preload-037711) Ensuring networks are active...
	I0920 19:05:23.817953  302538 main.go:141] libmachine: (no-preload-037711) Ensuring network default is active
	I0920 19:05:23.818345  302538 main.go:141] libmachine: (no-preload-037711) Ensuring network mk-no-preload-037711 is active
	I0920 19:05:23.818824  302538 main.go:141] libmachine: (no-preload-037711) Getting domain xml...
	I0920 19:05:23.819705  302538 main.go:141] libmachine: (no-preload-037711) Creating domain...
	I0920 19:05:25.216298  302538 main.go:141] libmachine: (no-preload-037711) Waiting to get IP...
	I0920 19:05:25.217452  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:25.218073  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:25.218138  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:25.218047  304582 retry.go:31] will retry after 256.299732ms: waiting for machine to come up
	I0920 19:05:25.475745  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:25.476451  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:25.476485  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:25.476388  304582 retry.go:31] will retry after 298.732749ms: waiting for machine to come up
	I0920 19:05:25.777093  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:25.777731  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:25.777755  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:25.777701  304582 retry.go:31] will retry after 360.011383ms: waiting for machine to come up
	I0920 19:05:26.139480  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:26.140100  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:26.140132  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:26.140049  304582 retry.go:31] will retry after 593.756705ms: waiting for machine to come up
	I0920 19:05:24.924455  303063 node_ready.go:53] node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:26.425132  303063 node_ready.go:49] node "default-k8s-diff-port-612312" has status "Ready":"True"
	I0920 19:05:26.425165  303063 node_ready.go:38] duration metric: took 7.505210484s for node "default-k8s-diff-port-612312" to be "Ready" ...
	I0920 19:05:26.425181  303063 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:05:26.433394  303063 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:26.440462  303063 pod_ready.go:93] pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:26.440497  303063 pod_ready.go:82] duration metric: took 7.072952ms for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:26.440513  303063 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:25.354959  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:25.358179  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:25.358467  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:25.358495  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:25.358739  303486 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 19:05:25.362714  303486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:25.375880  303486 kubeadm.go:883] updating cluster {Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:05:25.376024  303486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 19:05:25.376074  303486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:25.420224  303486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 19:05:25.420307  303486 ssh_runner.go:195] Run: which lz4
	I0920 19:05:25.424775  303486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:05:25.430102  303486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:05:25.430151  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 19:05:27.014068  303486 crio.go:462] duration metric: took 1.589333502s to copy over tarball
	I0920 19:05:27.014160  303486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 19:05:25.786282  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:27.788058  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:26.735924  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:26.736558  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:26.736582  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:26.736458  304582 retry.go:31] will retry after 712.118443ms: waiting for machine to come up
	I0920 19:05:27.450059  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:27.450696  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:27.450719  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:27.450592  304582 retry.go:31] will retry after 588.649809ms: waiting for machine to come up
	I0920 19:05:28.041216  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:28.041760  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:28.041791  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:28.041691  304582 retry.go:31] will retry after 869.42079ms: waiting for machine to come up
	I0920 19:05:28.912809  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:28.913240  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:28.913265  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:28.913174  304582 retry.go:31] will retry after 1.410011475s: waiting for machine to come up
	I0920 19:05:30.324367  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:30.324952  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:30.324980  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:30.324875  304582 retry.go:31] will retry after 1.398358739s: waiting for machine to come up
	I0920 19:05:28.454512  303063 pod_ready.go:103] pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:31.546557  303063 pod_ready.go:103] pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:32.072690  303063 pod_ready.go:93] pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.072719  303063 pod_ready.go:82] duration metric: took 5.632196538s for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.072734  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.081029  303063 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.081062  303063 pod_ready.go:82] duration metric: took 8.319382ms for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.081076  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.087314  303063 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.087338  303063 pod_ready.go:82] duration metric: took 6.253184ms for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.087351  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.093286  303063 pod_ready.go:93] pod "kube-proxy-zp8l5" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.093313  303063 pod_ready.go:82] duration metric: took 5.953425ms for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.093326  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.098529  303063 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.098553  303063 pod_ready.go:82] duration metric: took 5.218413ms for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.098565  303063 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:30.096727  303486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.082523066s)
	I0920 19:05:30.096778  303486 crio.go:469] duration metric: took 3.082671461s to extract the tarball
	I0920 19:05:30.096789  303486 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 19:05:30.148059  303486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:30.184547  303486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 19:05:30.184578  303486 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 19:05:30.184672  303486 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:30.184686  303486 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.184711  303486 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.184730  303486 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.184732  303486 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.184686  303486 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 19:05:30.184693  303486 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.184792  303486 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.186558  303486 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:30.186609  303486 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 19:05:30.186607  303486 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.186616  303486 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.186688  303486 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.186698  303486 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.186701  303486 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.186565  303486 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.425283  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 19:05:30.469378  303486 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 19:05:30.469448  303486 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 19:05:30.469514  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.475453  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:05:30.493250  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.505003  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.513203  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.514365  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.521729  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:05:30.533265  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.580710  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.613984  303486 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 19:05:30.614033  303486 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.614085  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.653094  303486 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 19:05:30.653150  303486 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.653205  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.675697  303486 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 19:05:30.675730  303486 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 19:05:30.675752  303486 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.675762  303486 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.675805  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.675820  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.675805  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:05:30.709199  303486 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 19:05:30.709261  303486 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.709310  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.720146  303486 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 19:05:30.720198  303486 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.720233  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.720313  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.720241  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.720374  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.720247  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.737444  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.737487  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 19:05:30.843272  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.843362  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.843366  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.860414  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.860462  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.860430  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.954641  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.982227  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.982263  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:05:31.041996  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:31.042032  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:31.042650  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:31.042722  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:31.070786  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 19:05:31.120407  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 19:05:31.135751  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 19:05:31.163591  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 19:05:31.164483  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 19:05:31.164587  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 19:05:31.345957  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:31.486337  303486 cache_images.go:92] duration metric: took 1.301737533s to LoadCachedImages
	W0920 19:05:31.486434  303486 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0920 19:05:31.486452  303486 kubeadm.go:934] updating node { 192.168.39.53 8443 v1.20.0 crio true true} ...
	I0920 19:05:31.486576  303486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-425599 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:05:31.486661  303486 ssh_runner.go:195] Run: crio config
	I0920 19:05:31.544181  303486 cni.go:84] Creating CNI manager for ""
	I0920 19:05:31.544215  303486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:05:31.544229  303486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:05:31.544257  303486 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.53 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-425599 NodeName:old-k8s-version-425599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 19:05:31.544465  303486 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-425599"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:05:31.544556  303486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 19:05:31.559445  303486 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:05:31.559542  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:05:31.570446  303486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0920 19:05:31.588741  303486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:05:31.606454  303486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0920 19:05:31.624483  303486 ssh_runner.go:195] Run: grep 192.168.39.53	control-plane.minikube.internal$ /etc/hosts
	I0920 19:05:31.628285  303486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:31.641039  303486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:31.771690  303486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:05:31.789746  303486 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599 for IP: 192.168.39.53
	I0920 19:05:31.789775  303486 certs.go:194] generating shared ca certs ...
	I0920 19:05:31.789806  303486 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:31.790074  303486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:05:31.790150  303486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:05:31.790165  303486 certs.go:256] generating profile certs ...
	I0920 19:05:31.798117  303486 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/client.key
	I0920 19:05:31.798270  303486 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.key.e78cb154
	I0920 19:05:31.798333  303486 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.key
	I0920 19:05:31.798499  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:05:31.798543  303486 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:05:31.798557  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:05:31.798608  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:05:31.798659  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:05:31.798692  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:05:31.798748  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:31.799624  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:05:31.843298  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:05:31.877299  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:05:31.909777  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:05:31.947787  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 19:05:31.991175  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 19:05:32.019393  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:05:32.048475  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:05:32.084354  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:05:32.112161  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:05:32.138991  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:05:32.167653  303486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:05:32.185485  303486 ssh_runner.go:195] Run: openssl version
	I0920 19:05:32.192030  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:05:32.203761  303486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:32.209550  303486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:32.209650  303486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:32.216277  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:05:32.228192  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:05:32.239984  303486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:05:32.244782  303486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:05:32.244848  303486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:05:32.250865  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:05:32.262035  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:05:32.273790  303486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:05:32.279335  303486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:05:32.279414  303486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:05:32.286501  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:05:32.298052  303486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:05:32.303064  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:05:32.309973  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:05:32.316704  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:05:32.323166  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:05:32.330126  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:05:32.336554  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 19:05:32.343303  303486 kubeadm.go:392] StartCluster: {Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:05:32.343413  303486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:05:32.343473  303486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:32.387562  303486 cri.go:89] found id: ""
	I0920 19:05:32.387653  303486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:05:32.398143  303486 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:05:32.398167  303486 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:05:32.398222  303486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:05:32.407776  303486 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:05:32.409205  303486 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-425599" does not appear in /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:05:32.410267  303486 kubeconfig.go:62] /home/jenkins/minikube-integration/19679-237658/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-425599" cluster setting kubeconfig missing "old-k8s-version-425599" context setting]
	I0920 19:05:32.411776  303486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:32.457074  303486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:05:32.468055  303486 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.53
	I0920 19:05:32.468113  303486 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:05:32.468132  303486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:05:32.468211  303486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:32.505151  303486 cri.go:89] found id: ""
	I0920 19:05:32.505241  303486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:05:32.521391  303486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:05:32.531705  303486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:05:32.531728  303486 kubeadm.go:157] found existing configuration files:
	
	I0920 19:05:32.531774  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:05:32.541137  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:05:32.541219  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:05:32.550684  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:05:32.560262  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:05:32.560352  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:05:32.569735  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:05:32.579126  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:05:32.579199  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:05:32.589508  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:05:32.600985  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:05:32.601100  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:05:32.611511  303486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:05:32.622346  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:32.755562  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:33.793472  303486 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.037864747s)
	I0920 19:05:33.793513  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:30.283826  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:32.285077  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:31.725721  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:31.726171  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:31.726198  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:31.726127  304582 retry.go:31] will retry after 2.32427136s: waiting for machine to come up
	I0920 19:05:34.052412  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:34.053005  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:34.053043  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:34.052923  304582 retry.go:31] will retry after 2.159036217s: waiting for machine to come up
	I0920 19:05:36.215059  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:36.215561  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:36.215585  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:36.215501  304582 retry.go:31] will retry after 3.424610182s: waiting for machine to come up
	I0920 19:05:34.105780  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:36.106491  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:34.021260  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:34.142176  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:34.235507  303486 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:05:34.235618  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:34.736586  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:35.236065  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:35.735783  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:36.236406  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:36.736243  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:37.235994  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:37.736168  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:38.236559  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:38.736139  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:34.784743  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:37.282598  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:39.284890  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:39.642163  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:39.642600  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:39.642642  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:39.642541  304582 retry.go:31] will retry after 3.073679854s: waiting for machine to come up
	I0920 19:05:38.116192  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:40.605958  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:39.236010  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:39.735723  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:40.236003  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:40.735741  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:41.235689  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:41.736411  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:42.236028  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:42.735814  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:43.236391  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:43.736174  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:41.783707  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:43.784197  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:42.719195  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.719748  302538 main.go:141] libmachine: (no-preload-037711) Found IP for machine: 192.168.61.136
	I0920 19:05:42.719775  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has current primary IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.719780  302538 main.go:141] libmachine: (no-preload-037711) Reserving static IP address...
	I0920 19:05:42.720201  302538 main.go:141] libmachine: (no-preload-037711) Reserved static IP address: 192.168.61.136
	I0920 19:05:42.720220  302538 main.go:141] libmachine: (no-preload-037711) Waiting for SSH to be available...
	I0920 19:05:42.720239  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "no-preload-037711", mac: "52:54:00:b0:ac:14", ip: "192.168.61.136"} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.720268  302538 main.go:141] libmachine: (no-preload-037711) DBG | skip adding static IP to network mk-no-preload-037711 - found existing host DHCP lease matching {name: "no-preload-037711", mac: "52:54:00:b0:ac:14", ip: "192.168.61.136"}
	I0920 19:05:42.720280  302538 main.go:141] libmachine: (no-preload-037711) DBG | Getting to WaitForSSH function...
	I0920 19:05:42.722402  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.722661  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.722686  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.722864  302538 main.go:141] libmachine: (no-preload-037711) DBG | Using SSH client type: external
	I0920 19:05:42.722877  302538 main.go:141] libmachine: (no-preload-037711) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa (-rw-------)
	I0920 19:05:42.722939  302538 main.go:141] libmachine: (no-preload-037711) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:05:42.722962  302538 main.go:141] libmachine: (no-preload-037711) DBG | About to run SSH command:
	I0920 19:05:42.722979  302538 main.go:141] libmachine: (no-preload-037711) DBG | exit 0
	I0920 19:05:42.850057  302538 main.go:141] libmachine: (no-preload-037711) DBG | SSH cmd err, output: <nil>: 
	I0920 19:05:42.850451  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetConfigRaw
	I0920 19:05:42.851176  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:42.853807  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.854268  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.854290  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.854558  302538 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/config.json ...
	I0920 19:05:42.854764  302538 machine.go:93] provisionDockerMachine start ...
	I0920 19:05:42.854782  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:42.854999  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:42.857347  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.857683  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.857712  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.857892  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:42.858073  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.858242  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.858385  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:42.858569  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:42.858755  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:42.858766  302538 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:05:42.962098  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:05:42.962137  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:05:42.962455  302538 buildroot.go:166] provisioning hostname "no-preload-037711"
	I0920 19:05:42.962488  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:05:42.962696  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:42.965410  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.965793  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.965822  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.965954  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:42.966128  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.966285  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.966442  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:42.966650  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:42.966822  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:42.966847  302538 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-037711 && echo "no-preload-037711" | sudo tee /etc/hostname
	I0920 19:05:43.089291  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-037711
	
	I0920 19:05:43.089338  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.092213  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.092658  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.092689  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.092828  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.093031  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.093188  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.093305  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.093478  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:43.093692  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:43.093719  302538 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-037711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-037711/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-037711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:05:43.210625  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
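For reference, the shell snippet above keeps the 127.0.1.1 entry in sync with the machine hostname. A minimal Go sketch of the same idempotent update (the function ensureHostsEntry and the local-file approach are illustrative; minikube itself runs the quoted shell over SSH rather than code like this):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell above: if the hostname is not already
// present in /etc/hosts, rewrite the 127.0.1.1 line or append a new one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	content := string(data)
	// Equivalent of: grep -xq '.*\s<hostname>' /etc/hosts
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(content) {
		return nil
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(content) {
		content = loopback.ReplaceAllString(content, "127.0.1.1 "+hostname)
	} else {
		if !strings.HasSuffix(content, "\n") {
			content += "\n"
		}
		content += "127.0.1.1 " + hostname + "\n"
	}
	return os.WriteFile(path, []byte(content), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "no-preload-037711"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}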
	I0920 19:05:43.210660  302538 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:05:43.210720  302538 buildroot.go:174] setting up certificates
	I0920 19:05:43.210740  302538 provision.go:84] configureAuth start
	I0920 19:05:43.210758  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:05:43.211093  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:43.213829  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.214346  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.214379  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.214542  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.216979  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.217294  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.217319  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.217461  302538 provision.go:143] copyHostCerts
	I0920 19:05:43.217526  302538 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:05:43.217546  302538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:05:43.217610  302538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:05:43.217708  302538 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:05:43.217720  302538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:05:43.217750  302538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:05:43.217885  302538 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:05:43.217899  302538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:05:43.217947  302538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:05:43.218008  302538 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.no-preload-037711 san=[127.0.0.1 192.168.61.136 localhost minikube no-preload-037711]
	I0920 19:05:43.395507  302538 provision.go:177] copyRemoteCerts
	I0920 19:05:43.395576  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:05:43.395607  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.398288  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.398663  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.398694  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.398899  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.399087  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.399205  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.399324  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:43.488543  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 19:05:43.514793  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:05:43.537520  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:05:43.561983  302538 provision.go:87] duration metric: took 351.22541ms to configureAuth
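The server certificate above is generated against minikube's local CA with the SAN list shown (127.0.0.1, 192.168.61.136, localhost, minikube, no-preload-037711). A hedged Go sketch of producing a CA-signed server certificate with crypto/x509 (file names are placeholders and the CA key is assumed to be an RSA PKCS#1 PEM; this is not minikube's own code path):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA pair (placeholder paths; assumes an RSA key in PKCS#1 PEM).
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Fresh key pair for the server certificate.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-037711"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the san=[...] list in the provisioning log above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-037711"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.136")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}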
	I0920 19:05:43.562021  302538 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:05:43.562213  302538 config.go:182] Loaded profile config "no-preload-037711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:05:43.562292  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.565776  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.566235  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.566270  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.566486  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.566706  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.566895  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.567043  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.567251  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:43.567439  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:43.567454  302538 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:05:43.797110  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:05:43.797142  302538 machine.go:96] duration metric: took 942.364782ms to provisionDockerMachine
	I0920 19:05:43.797157  302538 start.go:293] postStartSetup for "no-preload-037711" (driver="kvm2")
	I0920 19:05:43.797171  302538 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:05:43.797193  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:43.797516  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:05:43.797546  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.800148  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.800532  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.800559  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.800794  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.800993  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.801158  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.801255  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:43.885788  302538 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:05:43.890070  302538 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:05:43.890108  302538 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:05:43.890198  302538 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:05:43.890293  302538 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:05:43.890405  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:05:43.899679  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:43.924928  302538 start.go:296] duration metric: took 127.752462ms for postStartSetup
	I0920 19:05:43.924973  302538 fix.go:56] duration metric: took 20.133755115s for fixHost
	I0920 19:05:43.924996  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.927678  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.928059  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.928099  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.928277  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.928517  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.928685  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.928815  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.928979  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:43.929190  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:43.929204  302538 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:05:44.042745  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859144.016675004
	
	I0920 19:05:44.042769  302538 fix.go:216] guest clock: 1726859144.016675004
	I0920 19:05:44.042776  302538 fix.go:229] Guest: 2024-09-20 19:05:44.016675004 +0000 UTC Remote: 2024-09-20 19:05:43.924977449 +0000 UTC m=+357.534412233 (delta=91.697555ms)
	I0920 19:05:44.042804  302538 fix.go:200] guest clock delta is within tolerance: 91.697555ms
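The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the drift if it falls inside a tolerance. A small Go sketch of that delta computation, reusing the exact values from the log (the 1s tolerance here is illustrative, not minikube's configured threshold):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the output of `date +%s.%N` run on the guest and
// returns how far the guest clock is from the supplied host reference time.
// %N is zero-padded to 9 digits, so the fraction parses directly as nanoseconds.
func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// Values from the log above: guest clock 1726859144.016675004,
	// host reference 2024-09-20 19:05:43.924977449 +0000 UTC.
	host := time.Date(2024, 9, 20, 19, 5, 43, 924977449, time.UTC)
	delta, _ := guestClockDelta("1726859144.016675004", host)
	const tolerance = 1 * time.Second // illustrative threshold only
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta.Abs() <= tolerance)
}

Run as written, this prints delta=91.697555ms, matching the drift reported in the log.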
	I0920 19:05:44.042819  302538 start.go:83] releasing machines lock for "no-preload-037711", held for 20.251627041s
	I0920 19:05:44.042842  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.043134  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:44.046071  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.046412  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:44.046440  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.046613  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.047113  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.047278  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.047366  302538 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:05:44.047428  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:44.047520  302538 ssh_runner.go:195] Run: cat /version.json
	I0920 19:05:44.047548  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:44.050275  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.050358  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.050849  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:44.050872  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:44.050892  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.050915  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.051095  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:44.051259  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:44.051259  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:44.051496  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:44.051637  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:44.051655  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:44.051789  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:44.051953  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:44.134420  302538 ssh_runner.go:195] Run: systemctl --version
	I0920 19:05:44.175303  302538 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:05:44.319129  302538 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:05:44.325894  302538 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:05:44.325975  302538 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:05:44.341779  302538 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:05:44.341809  302538 start.go:495] detecting cgroup driver to use...
	I0920 19:05:44.341899  302538 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:05:44.358211  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:05:44.373240  302538 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:05:44.373327  302538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:05:44.387429  302538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:05:44.401684  302538 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:05:44.521292  302538 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:05:44.668050  302538 docker.go:233] disabling docker service ...
	I0920 19:05:44.668124  302538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:05:44.683196  302538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:05:44.696604  302538 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:05:44.843581  302538 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:05:44.959377  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:05:44.973472  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:05:44.991282  302538 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:05:44.991344  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.001696  302538 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:05:45.001776  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.012684  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.023288  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.034330  302538 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:05:45.045773  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.056332  302538 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.074730  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.085656  302538 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:05:45.096371  302538 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:05:45.096447  302538 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:05:45.112094  302538 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:05:45.123050  302538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:45.236136  302538 ssh_runner.go:195] Run: sudo systemctl restart crio
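The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed and then reloads CRI-O. A minimal Go sketch of the two central edits, the pause image and the cgroup manager (the function name and local-file approach are illustrative; minikube performs these edits remotely with the sed commands shown):

package main

import (
	"log"
	"os"
	"regexp"
)

// rewriteCrioConf mirrors the sed edits above: point pause_image at the
// requested pause image and force cgroup_manager to the chosen driver.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "`+cgroupManager+`"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs")
	if err != nil {
		log.Fatal(err)
	}
	// The log then shows `systemctl daemon-reload` and `systemctl restart crio`
	// being run so the new drop-in takes effect.
}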
	I0920 19:05:45.325978  302538 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:05:45.326065  302538 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:05:45.330452  302538 start.go:563] Will wait 60s for crictl version
	I0920 19:05:45.330527  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.334010  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:05:45.373622  302538 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:05:45.373736  302538 ssh_runner.go:195] Run: crio --version
	I0920 19:05:45.401279  302538 ssh_runner.go:195] Run: crio --version
	I0920 19:05:45.430445  302538 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 19:05:45.431717  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:45.434768  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:45.435094  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:45.435121  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:45.435335  302538 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 19:05:45.439275  302538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:45.451300  302538 kubeadm.go:883] updating cluster {Name:no-preload-037711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:no-preload-037711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:05:45.451461  302538 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:05:45.451502  302538 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:45.485045  302538 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 19:05:45.485073  302538 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 19:05:45.485130  302538 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:45.485150  302538 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.485168  302538 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.485182  302538 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.485231  302538 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.485171  302538 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.485305  302538 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 19:05:45.485450  302538 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.486694  302538 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.486700  302538 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.486808  302538 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.486808  302538 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 19:05:45.486829  302538 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.486894  302538 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:45.486829  302538 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.487055  302538 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.708911  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0920 19:05:45.773014  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.815176  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.818274  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.818298  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.829644  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.850791  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.862553  302538 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0920 19:05:45.862616  302538 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.862680  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.907516  302538 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0920 19:05:45.907573  302538 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.907629  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.938640  302538 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0920 19:05:45.938715  302538 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.938755  302538 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0920 19:05:45.938799  302538 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.938845  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.938770  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.947658  302538 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0920 19:05:45.947706  302538 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.947757  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.965105  302538 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0920 19:05:45.965161  302538 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.965166  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.965191  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.965248  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.965282  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.965344  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.965350  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:46.044513  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:46.044640  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:46.077894  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:46.080113  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:46.080170  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:46.080239  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:46.155137  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:46.155188  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:46.208431  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:46.208477  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:46.208521  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:46.208565  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:46.290657  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:46.290694  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 19:05:46.290794  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 19:05:46.325206  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 19:05:46.325353  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 19:05:46.353181  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0920 19:05:46.353289  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 19:05:46.353307  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 19:05:46.353312  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0920 19:05:46.353331  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0920 19:05:46.353383  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0920 19:05:46.353418  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 19:05:46.353384  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 19:05:46.353512  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 19:05:46.379873  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0920 19:05:46.379934  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0920 19:05:46.379979  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 19:05:46.380024  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0920 19:05:46.379981  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0920 19:05:46.380134  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 19:05:43.105005  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:45.105781  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:47.604822  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:44.235886  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:44.736349  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:45.235783  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:45.736619  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:46.236082  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:46.736609  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:47.236078  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:47.736130  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:48.236218  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:48.735858  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:45.784555  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:47.785125  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:46.622278  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:48.339532  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.985991382s)
	I0920 19:05:48.339568  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0920 19:05:48.339594  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 19:05:48.339653  302538 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.959488823s)
	I0920 19:05:48.339685  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0920 19:05:48.339665  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 19:05:48.339742  302538 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.717432253s)
	I0920 19:05:48.339787  302538 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0920 19:05:48.339815  302538 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:48.339842  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:48.343725  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:50.823508  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.483779728s)
	I0920 19:05:50.823559  302538 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.479795238s)
	I0920 19:05:50.823593  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0920 19:05:50.823637  302538 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0920 19:05:50.823649  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:50.823692  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0920 19:05:49.607326  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:51.609055  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:49.236645  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:49.736183  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:50.236642  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:50.735713  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:51.235862  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:51.736479  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:52.235726  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:52.735939  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:53.235759  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:53.736290  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:50.284090  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:52.284996  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:54.127303  302538 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.303601736s)
	I0920 19:05:54.127415  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:54.127327  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.303608969s)
	I0920 19:05:54.127455  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0920 19:05:54.127488  302538 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0920 19:05:54.127530  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0920 19:05:56.202021  302538 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.074563861s)
	I0920 19:05:56.202050  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.074501802s)
	I0920 19:05:56.202076  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0920 19:05:56.202095  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 19:05:56.202118  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 19:05:56.202184  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 19:05:56.202202  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 19:05:56.207141  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0920 19:05:54.104909  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:56.105373  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:54.235840  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:54.735817  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:55.235812  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:55.736410  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:56.236203  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:56.735713  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:57.235777  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:57.735835  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:58.236448  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:58.736010  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:54.783661  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:56.784770  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:58.785122  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:58.166303  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.964088667s)
	I0920 19:05:58.166340  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0920 19:05:58.166369  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 19:05:58.166424  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 19:05:59.625258  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.458808535s)
	I0920 19:05:59.625294  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0920 19:05:59.625318  302538 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 19:05:59.625361  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0920 19:06:00.572722  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 19:06:00.572768  302538 cache_images.go:123] Successfully loaded all cached images
	I0920 19:06:00.572774  302538 cache_images.go:92] duration metric: took 15.087689513s to LoadCachedImages
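The image-loading phase above follows one pattern per image: inspect with podman to see whether the runtime already has it, remove any stale tag with crictl, then `podman load` the cached tarball. A hedged local sketch of that sequence with os/exec (minikube actually runs these commands over SSH and also compares image IDs against expected hashes before deciding to transfer):

package main

import (
	"fmt"
	"os/exec"
)

// ensureImage mirrors the flow in the log: if the container runtime does not
// already have the image, drop any stale tag and load the cached tarball.
func ensureImage(image, tarball string) error {
	if err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Run(); err == nil {
		return nil // already present in the container runtime
	}
	// Ignore errors here: the tag may simply not exist yet.
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
	if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := ensureImage("registry.k8s.io/kube-proxy:v1.31.1",
		"/var/lib/minikube/images/kube-proxy_v1.31.1"); err != nil {
		fmt.Println(err)
	}
}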
	I0920 19:06:00.572788  302538 kubeadm.go:934] updating node { 192.168.61.136 8443 v1.31.1 crio true true} ...
	I0920 19:06:00.572917  302538 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-037711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-037711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:06:00.572994  302538 ssh_runner.go:195] Run: crio config
	I0920 19:06:00.619832  302538 cni.go:84] Creating CNI manager for ""
	I0920 19:06:00.619861  302538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:06:00.619875  302538 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:06:00.619910  302538 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.136 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-037711 NodeName:no-preload-037711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:06:00.620110  302538 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-037711"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:06:00.620181  302538 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:06:00.630434  302538 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:06:00.630513  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:06:00.639447  302538 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0920 19:06:00.656195  302538 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:06:00.675718  302538 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
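Note: the kubeadm.yaml pushed to the node above is rendered by minikube from the cluster parameters logged earlier (advertise address, node name, CRI socket, CIDRs). As a rough illustration only — not minikube's actual template or types — the InitConfiguration part of such a file could be produced with Go's text/template:

package main

import (
	"os"
	"text/template"
)

// params holds the handful of values that vary per cluster; the field names
// here are illustrative, not minikube's real config struct.
type params struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	// Values taken from this run's log; another cluster would substitute its own.
	p := params{
		AdvertiseAddress: "192.168.61.136",
		BindPort:         8443,
		NodeName:         "no-preload-037711",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}

Running the sketch prints a YAML fragment matching the top of the config shown above.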
	I0920 19:06:00.709191  302538 ssh_runner.go:195] Run: grep 192.168.61.136	control-plane.minikube.internal$ /etc/hosts
	I0920 19:06:00.713271  302538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
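The /etc/hosts rewrite above is made idempotent by stripping any existing control-plane.minikube.internal line before appending the current mapping. A minimal Go sketch of the same idea (ensureHostsEntry is a hypothetical helper; the real step is the bash one-liner in the log, which additionally writes through a temp file and sudo cp):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry removes any stale line for host and appends "ip\thost",
// mirroring the { grep -v ...; echo ...; } pattern from the log. For brevity
// it writes the file in place instead of going through a temp file.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.61.136", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}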
	I0920 19:06:00.726826  302538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:06:00.850927  302538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:06:00.869014  302538 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711 for IP: 192.168.61.136
	I0920 19:06:00.869044  302538 certs.go:194] generating shared ca certs ...
	I0920 19:06:00.869109  302538 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:06:00.869331  302538 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:06:00.869393  302538 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:06:00.869405  302538 certs.go:256] generating profile certs ...
	I0920 19:06:00.869507  302538 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/client.key
	I0920 19:06:00.869589  302538 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/apiserver.key.b5da98fb
	I0920 19:06:00.869654  302538 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/proxy-client.key
	I0920 19:06:00.869831  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:06:00.869877  302538 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:06:00.869890  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:06:00.869947  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:06:00.869981  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:06:00.870010  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:06:00.870068  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:06:00.870802  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:06:00.922699  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:06:00.953401  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:06:00.996889  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:06:01.024682  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 19:06:01.050412  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 19:06:01.081212  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:06:01.108337  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:06:01.133628  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:06:01.158805  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:06:01.186888  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:06:01.211771  302538 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:06:01.229448  302538 ssh_runner.go:195] Run: openssl version
	I0920 19:06:01.235289  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:06:01.246775  302538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:06:01.251410  302538 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:06:01.251472  302538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:06:01.257271  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:06:01.268229  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:06:01.280431  302538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:06:01.285643  302538 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:06:01.285736  302538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:06:01.291772  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:06:01.302858  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:06:01.314034  302538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:06:01.319160  302538 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:06:01.319235  302538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:06:01.325450  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:06:01.336803  302538 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:06:01.341439  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:06:01.347592  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:06:01.354109  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:06:01.360513  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:06:01.366749  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:06:01.372898  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
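Each `openssl x509 -noout -checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours, which is what decides whether a cert gets regenerated. The same check sketched with Go's crypto/x509 (illustrative; expiresWithin is a hypothetical helper, not minikube's certs.go):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the equivalent of `openssl x509 -noout -in path -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}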
	I0920 19:06:01.379101  302538 kubeadm.go:392] StartCluster: {Name:no-preload-037711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-037711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:06:01.379228  302538 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:06:01.379280  302538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:06:01.416896  302538 cri.go:89] found id: ""
	I0920 19:06:01.416972  302538 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:05:58.606203  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:00.606802  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:59.236283  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:59.736440  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:00.236142  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:00.735772  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:01.236360  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:01.736388  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:02.236462  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:02.736742  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.236457  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.736705  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:01.284596  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:03.784495  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:01.428611  302538 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:06:01.428636  302538 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:06:01.428685  302538 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:06:01.439392  302538 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:06:01.440512  302538 kubeconfig.go:125] found "no-preload-037711" server: "https://192.168.61.136:8443"
	I0920 19:06:01.442938  302538 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:06:01.452938  302538 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.136
	I0920 19:06:01.452982  302538 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:06:01.452999  302538 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:06:01.453062  302538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:06:01.487878  302538 cri.go:89] found id: ""
	I0920 19:06:01.487967  302538 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:06:01.506032  302538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:06:01.516536  302538 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:06:01.516562  302538 kubeadm.go:157] found existing configuration files:
	
	I0920 19:06:01.516609  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:06:01.526718  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:06:01.526790  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:06:01.536809  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:06:01.546172  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:06:01.546243  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:06:01.556211  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:06:01.565796  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:06:01.565869  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:06:01.577089  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:06:01.587862  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:06:01.587985  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:06:01.598666  302538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:06:01.610018  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:01.740046  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.566817  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.784258  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.848752  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.933469  302538 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:06:02.933579  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.434385  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.933975  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.962422  302538 api_server.go:72] duration metric: took 1.028951755s to wait for apiserver process to appear ...
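The bursts of `sudo pgrep -xnf kube-apiserver.*minikube.*` lines are a plain poll: rerun pgrep roughly every 500ms until it exits 0 (a matching process exists) or the wait times out. Sketched as a standalone Go helper (waitForProcess is hypothetical, not minikube's api_server.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern appears or the
// timeout elapses; pgrep exits 0 when at least one match is found.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for process %q", pattern)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("kube-apiserver is running")
}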
	I0920 19:06:03.962453  302538 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:06:03.962485  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:03.963119  302538 api_server.go:269] stopped: https://192.168.61.136:8443/healthz: Get "https://192.168.61.136:8443/healthz": dial tcp 192.168.61.136:8443: connect: connection refused
	I0920 19:06:04.462843  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.443140  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:06:06.443178  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:06:06.443196  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.485554  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:06:06.485597  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:06:06.485614  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.566023  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:06:06.566068  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:06:06.963116  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.972764  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:06:06.972804  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:06:07.463432  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:07.470963  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:06:07.471000  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:06:07.962553  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:07.967724  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 200:
	ok
	I0920 19:06:07.975215  302538 api_server.go:141] control plane version: v1.31.1
	I0920 19:06:07.975248  302538 api_server.go:131] duration metric: took 4.01278814s to wait for apiserver health ...
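The healthz wait above treats 403 (anonymous access is forbidden until the RBAC bootstrap roles exist) and 500 (poststart hooks such as bootstrap-controller still failing) as "not ready yet" and stops only on 200. A minimal sketch of that polling loop, assuming it is acceptable to skip TLS verification against the test cluster's self-signed apiserver certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns 200 OK or the timeout elapses.
// Intermediate 403/500 responses are simply retried, as in the log above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Self-signed apiserver cert on a throwaway test cluster, so skip verification here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.61.136:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("healthz ok")
}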
	I0920 19:06:07.975258  302538 cni.go:84] Creating CNI manager for ""
	I0920 19:06:07.975267  302538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:06:07.977455  302538 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:06:03.106079  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:05.609475  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:04.236005  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:04.735854  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:05.236716  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:05.736668  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:06.235839  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:06.736412  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:07.236224  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:07.735830  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:08.235800  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:08.736645  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:06.284930  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:08.784854  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:07.979099  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:06:07.991210  302538 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:06:08.016110  302538 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:06:08.031124  302538 system_pods.go:59] 8 kube-system pods found
	I0920 19:06:08.031177  302538 system_pods.go:61] "coredns-7c65d6cfc9-8gmsq" [91d89ad2-f899-464c-b351-a0773c16223b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:06:08.031191  302538 system_pods.go:61] "etcd-no-preload-037711" [5b353ad3-0389-4e3d-b5c3-2f2bc65db200] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 19:06:08.031203  302538 system_pods.go:61] "kube-apiserver-no-preload-037711" [b19002c7-f891-4bc1-a2f0-0f6beebb3987] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 19:06:08.031247  302538 system_pods.go:61] "kube-controller-manager-no-preload-037711" [a5b1951d-7189-4ee3-bc28-bed058048ebb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:06:08.031262  302538 system_pods.go:61] "kube-proxy-zzmkv" [c8f4695b-eefd-407a-9b7c-d5078632d120] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 19:06:08.031270  302538 system_pods.go:61] "kube-scheduler-no-preload-037711" [b44824ba-52ad-4e86-9408-118f0e1852d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 19:06:08.031280  302538 system_pods.go:61] "metrics-server-6867b74b74-7xpgm" [f6280d56-5be4-475f-91da-2862e992868f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:06:08.031290  302538 system_pods.go:61] "storage-provisioner" [d1efb64f-d2a9-4bb4-9bc3-c643c415fcf2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 19:06:08.031300  302538 system_pods.go:74] duration metric: took 15.160935ms to wait for pod list to return data ...
	I0920 19:06:08.031310  302538 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:06:08.035903  302538 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:06:08.035953  302538 node_conditions.go:123] node cpu capacity is 2
	I0920 19:06:08.035968  302538 node_conditions.go:105] duration metric: took 4.652846ms to run NodePressure ...
	I0920 19:06:08.035995  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:08.404721  302538 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 19:06:08.409400  302538 kubeadm.go:739] kubelet initialised
	I0920 19:06:08.409423  302538 kubeadm.go:740] duration metric: took 4.670172ms waiting for restarted kubelet to initialise ...
	I0920 19:06:08.409432  302538 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:06:08.416547  302538 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace to be "Ready" ...
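pod_ready.go repeatedly fetches each system pod and checks its PodReady condition until it is True or the 4m0s budget runs out. A sketch of the same wait using client-go (kubeconfig path and pod name taken from this run; this is not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true when the pod reports the Ready condition as True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-8gmsq", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for pod to become Ready")
}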
	I0920 19:06:10.426817  302538 pod_ready.go:103] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:08.107050  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:10.606744  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:09.236127  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:09.735809  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:10.236585  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:10.735863  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:11.236700  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:11.736557  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:12.236483  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:12.735695  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:13.235905  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:13.736128  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:10.785471  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:13.284642  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:12.923811  302538 pod_ready.go:103] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:15.423162  302538 pod_ready.go:103] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:15.926280  302538 pod_ready.go:93] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:15.926318  302538 pod_ready.go:82] duration metric: took 7.509740963s for pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:15.926332  302538 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:15.932683  302538 pod_ready.go:93] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:15.932713  302538 pod_ready.go:82] duration metric: took 6.372388ms for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:15.932725  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:13.111190  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:15.606371  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:14.236234  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:14.736677  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:15.236499  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:15.735667  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:16.235774  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:16.735833  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:17.236149  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:17.735782  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:18.236400  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:18.736460  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:15.784441  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:18.284748  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:17.938853  302538 pod_ready.go:103] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:19.939569  302538 pod_ready.go:103] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:18.104867  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:20.105870  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:22.605773  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:19.236298  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:19.736672  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:20.236401  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:20.735810  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:21.235673  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:21.736112  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:22.235998  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:22.736179  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:23.236680  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:23.736388  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:20.783320  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:22.783590  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:21.939753  302538 pod_ready.go:93] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:21.939781  302538 pod_ready.go:82] duration metric: took 6.007035191s for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:21.939794  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.446396  302538 pod_ready.go:93] pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:22.446425  302538 pod_ready.go:82] duration metric: took 506.622064ms for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.446435  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zzmkv" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.452105  302538 pod_ready.go:93] pod "kube-proxy-zzmkv" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:22.452130  302538 pod_ready.go:82] duration metric: took 5.688419ms for pod "kube-proxy-zzmkv" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.452139  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.456181  302538 pod_ready.go:93] pod "kube-scheduler-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:22.456205  302538 pod_ready.go:82] duration metric: took 4.05917ms for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.456215  302538 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:24.463262  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:24.606021  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:27.105497  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:24.236369  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:24.736082  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:25.236694  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:25.736346  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:26.236075  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:26.736666  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:27.236418  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:27.736656  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:28.235972  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:28.735743  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:24.783673  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:26.783960  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.283970  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:26.962413  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.462423  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.606628  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:32.105603  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.236688  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:29.736132  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:30.236404  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:30.735733  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:31.236364  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:31.736031  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:32.236457  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:32.735751  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:33.236371  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:33.736474  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:31.284572  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:33.286630  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:31.464686  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:33.962309  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:35.963445  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:34.105897  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:36.605140  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:34.236387  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:34.236472  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:34.276702  303486 cri.go:89] found id: ""
	I0920 19:06:34.276735  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.276747  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:34.276758  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:34.276815  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:34.312886  303486 cri.go:89] found id: ""
	I0920 19:06:34.312923  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.312935  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:34.312950  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:34.313024  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:34.347199  303486 cri.go:89] found id: ""
	I0920 19:06:34.347240  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.347250  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:34.347258  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:34.347332  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:34.383077  303486 cri.go:89] found id: ""
	I0920 19:06:34.383110  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.383121  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:34.383130  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:34.383202  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:34.421184  303486 cri.go:89] found id: ""
	I0920 19:06:34.421212  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.421222  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:34.421231  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:34.421304  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:34.459964  303486 cri.go:89] found id: ""
	I0920 19:06:34.459998  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.460009  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:34.460018  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:34.460085  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:34.493761  303486 cri.go:89] found id: ""
	I0920 19:06:34.493803  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.493815  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:34.493824  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:34.493894  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:34.534406  303486 cri.go:89] found id: ""
	I0920 19:06:34.534445  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.534457  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:34.534471  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:34.534496  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:34.607256  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:34.607297  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:34.644923  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:34.644953  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:34.693574  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:34.693622  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:34.707703  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:34.707742  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:34.846809  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:37.347895  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:37.377651  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:37.377728  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:37.430034  303486 cri.go:89] found id: ""
	I0920 19:06:37.430071  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.430079  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:37.430087  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:37.430156  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:37.467026  303486 cri.go:89] found id: ""
	I0920 19:06:37.467055  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.467063  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:37.467069  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:37.467120  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:37.505791  303486 cri.go:89] found id: ""
	I0920 19:06:37.505824  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.505835  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:37.505845  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:37.505943  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:37.541519  303486 cri.go:89] found id: ""
	I0920 19:06:37.541556  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.541568  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:37.541577  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:37.541633  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:37.576088  303486 cri.go:89] found id: ""
	I0920 19:06:37.576126  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.576137  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:37.576146  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:37.576204  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:37.613039  303486 cri.go:89] found id: ""
	I0920 19:06:37.613074  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.613084  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:37.613091  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:37.613153  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:37.656440  303486 cri.go:89] found id: ""
	I0920 19:06:37.656473  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.656482  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:37.656489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:37.656555  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:37.693247  303486 cri.go:89] found id: ""
	I0920 19:06:37.693283  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.693292  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:37.693302  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:37.693321  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:37.769230  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:37.769280  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:37.811016  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:37.811058  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:37.865729  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:37.865773  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:37.880056  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:37.880094  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:37.956402  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:35.783789  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:37.787063  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:38.461824  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:40.465028  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:38.605494  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:40.605606  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:40.457303  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:40.473769  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:40.473848  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:40.511320  303486 cri.go:89] found id: ""
	I0920 19:06:40.511354  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.511363  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:40.511371  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:40.511433  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:40.547086  303486 cri.go:89] found id: ""
	I0920 19:06:40.547127  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.547138  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:40.547147  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:40.547216  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:40.580969  303486 cri.go:89] found id: ""
	I0920 19:06:40.581010  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.581022  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:40.581035  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:40.581098  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:40.615802  303486 cri.go:89] found id: ""
	I0920 19:06:40.615842  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.615851  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:40.615858  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:40.615931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:40.649398  303486 cri.go:89] found id: ""
	I0920 19:06:40.649444  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.649459  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:40.649467  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:40.649541  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:40.683124  303486 cri.go:89] found id: ""
	I0920 19:06:40.683160  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.683172  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:40.683181  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:40.683251  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:40.718005  303486 cri.go:89] found id: ""
	I0920 19:06:40.718032  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.718040  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:40.718047  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:40.718107  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:40.751965  303486 cri.go:89] found id: ""
	I0920 19:06:40.751992  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.752000  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:40.752010  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:40.752024  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:40.765195  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:40.765234  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:40.842287  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:40.842321  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:40.842338  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:40.928384  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:40.928430  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:40.970207  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:40.970242  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:43.526435  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:43.540582  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:43.540680  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:43.576798  303486 cri.go:89] found id: ""
	I0920 19:06:43.576837  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.576846  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:43.576852  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:43.576916  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:43.615261  303486 cri.go:89] found id: ""
	I0920 19:06:43.615290  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.615298  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:43.615305  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:43.615359  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:43.651214  303486 cri.go:89] found id: ""
	I0920 19:06:43.651251  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.651264  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:43.651277  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:43.651338  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:43.684483  303486 cri.go:89] found id: ""
	I0920 19:06:43.684523  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.684535  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:43.684544  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:43.684614  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:43.720996  303486 cri.go:89] found id: ""
	I0920 19:06:43.721026  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.721035  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:43.721041  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:43.721107  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:43.764445  303486 cri.go:89] found id: ""
	I0920 19:06:43.764482  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.764493  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:43.764501  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:43.764564  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:43.808848  303486 cri.go:89] found id: ""
	I0920 19:06:43.808878  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.808888  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:43.808897  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:43.808968  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:43.845462  303486 cri.go:89] found id: ""
	I0920 19:06:43.845491  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.845500  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:43.845511  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:43.845525  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:43.896550  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:43.896596  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:43.909243  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:43.909272  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:06:40.284735  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:42.783363  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:42.962289  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:44.963071  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:43.106353  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:45.606296  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	W0920 19:06:43.987455  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:43.987474  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:43.987491  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:44.063585  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:44.063629  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:46.602859  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:46.617286  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:46.617357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:46.653643  303486 cri.go:89] found id: ""
	I0920 19:06:46.653681  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.653693  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:46.653702  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:46.653778  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:46.691169  303486 cri.go:89] found id: ""
	I0920 19:06:46.691198  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.691206  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:46.691213  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:46.691271  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:46.725498  303486 cri.go:89] found id: ""
	I0920 19:06:46.725527  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.725538  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:46.725545  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:46.725614  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:46.758850  303486 cri.go:89] found id: ""
	I0920 19:06:46.758876  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.758884  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:46.758891  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:46.758942  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:46.793648  303486 cri.go:89] found id: ""
	I0920 19:06:46.793683  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.793692  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:46.793699  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:46.793755  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:46.832908  303486 cri.go:89] found id: ""
	I0920 19:06:46.832940  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.832947  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:46.832953  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:46.833019  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:46.866450  303486 cri.go:89] found id: ""
	I0920 19:06:46.866502  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.866513  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:46.866522  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:46.866593  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:46.901966  303486 cri.go:89] found id: ""
	I0920 19:06:46.902001  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.902013  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:46.902026  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:46.902041  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:46.948901  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:46.948946  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:46.963489  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:46.963534  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:47.041701  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:47.041722  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:47.041736  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:47.124320  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:47.124364  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:44.783818  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:46.784000  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:48.785175  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:46.963700  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:49.462018  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:48.104361  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:50.105520  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:52.605799  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:49.664255  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:49.677240  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:49.677322  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:49.712375  303486 cri.go:89] found id: ""
	I0920 19:06:49.712401  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.712409  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:49.712415  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:49.712476  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:49.747682  303486 cri.go:89] found id: ""
	I0920 19:06:49.747713  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.747721  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:49.747727  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:49.747783  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:49.782276  303486 cri.go:89] found id: ""
	I0920 19:06:49.782319  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.782329  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:49.782337  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:49.782400  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:49.822625  303486 cri.go:89] found id: ""
	I0920 19:06:49.822661  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.822672  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:49.822680  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:49.822751  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:49.862159  303486 cri.go:89] found id: ""
	I0920 19:06:49.862192  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.862202  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:49.862212  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:49.862281  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:49.897552  303486 cri.go:89] found id: ""
	I0920 19:06:49.897587  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.897595  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:49.897608  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:49.897667  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:49.931667  303486 cri.go:89] found id: ""
	I0920 19:06:49.931698  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.931709  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:49.931718  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:49.931774  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:49.969206  303486 cri.go:89] found id: ""
	I0920 19:06:49.969236  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.969244  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:49.969254  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:49.969266  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:50.019287  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:50.019328  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:50.033080  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:50.033113  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:50.106415  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:50.106442  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:50.106459  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:50.183710  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:50.183762  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:52.725443  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:52.739293  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:52.739386  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:52.772412  303486 cri.go:89] found id: ""
	I0920 19:06:52.772445  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.772454  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:52.772461  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:52.772528  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:52.811153  303486 cri.go:89] found id: ""
	I0920 19:06:52.811189  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.811197  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:52.811204  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:52.811260  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:52.848709  303486 cri.go:89] found id: ""
	I0920 19:06:52.848740  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.848749  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:52.848755  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:52.848811  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:52.883358  303486 cri.go:89] found id: ""
	I0920 19:06:52.883387  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.883394  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:52.883400  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:52.883455  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:52.917838  303486 cri.go:89] found id: ""
	I0920 19:06:52.917874  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.917893  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:52.917912  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:52.917982  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:52.952340  303486 cri.go:89] found id: ""
	I0920 19:06:52.952378  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.952387  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:52.952396  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:52.952471  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:52.986433  303486 cri.go:89] found id: ""
	I0920 19:06:52.986469  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.986478  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:52.986486  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:52.986582  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:53.024209  303486 cri.go:89] found id: ""
	I0920 19:06:53.024241  303486 logs.go:276] 0 containers: []
	W0920 19:06:53.024249  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:53.024260  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:53.024272  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:53.075336  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:53.075374  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:53.090761  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:53.090802  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:53.167883  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:53.167915  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:53.167933  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:53.242003  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:53.242044  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:50.785624  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:53.284212  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:51.462197  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:53.962545  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:55.962875  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:54.607806  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:57.105146  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:55.779107  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:55.793713  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:55.793802  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:55.829411  303486 cri.go:89] found id: ""
	I0920 19:06:55.829441  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.829450  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:55.829456  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:55.829513  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:55.864578  303486 cri.go:89] found id: ""
	I0920 19:06:55.864606  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.864617  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:55.864625  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:55.864686  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:55.897004  303486 cri.go:89] found id: ""
	I0920 19:06:55.897033  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.897041  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:55.897048  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:55.897106  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:55.931019  303486 cri.go:89] found id: ""
	I0920 19:06:55.931055  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.931066  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:55.931076  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:55.931141  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:55.966595  303486 cri.go:89] found id: ""
	I0920 19:06:55.966625  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.966635  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:55.966643  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:55.966693  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:55.999707  303486 cri.go:89] found id: ""
	I0920 19:06:55.999736  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.999747  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:55.999756  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:55.999825  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:56.034323  303486 cri.go:89] found id: ""
	I0920 19:06:56.034361  303486 logs.go:276] 0 containers: []
	W0920 19:06:56.034371  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:56.034377  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:56.034433  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:56.069019  303486 cri.go:89] found id: ""
	I0920 19:06:56.069048  303486 logs.go:276] 0 containers: []
	W0920 19:06:56.069056  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:56.069066  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:56.069077  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:56.122820  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:56.122860  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:56.136924  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:56.136966  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:56.216255  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:56.216284  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:56.216299  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:56.293461  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:56.293506  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:58.831252  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:58.844410  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:58.844474  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:58.877508  303486 cri.go:89] found id: ""
	I0920 19:06:58.877539  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.877547  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:58.877555  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:58.877613  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:58.911284  303486 cri.go:89] found id: ""
	I0920 19:06:58.911315  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.911323  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:58.911329  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:58.911382  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:58.944646  303486 cri.go:89] found id: ""
	I0920 19:06:58.944675  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.944682  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:58.944688  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:58.944739  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:55.784379  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:58.283450  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:58.461839  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:00.461977  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:59.108066  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:01.605247  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:58.979752  303486 cri.go:89] found id: ""
	I0920 19:06:58.979787  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.979798  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:58.979807  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:58.979864  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:59.016613  303486 cri.go:89] found id: ""
	I0920 19:06:59.016649  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.016661  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:59.016670  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:59.016735  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:59.052012  303486 cri.go:89] found id: ""
	I0920 19:06:59.052039  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.052047  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:59.052054  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:59.052106  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:59.090102  303486 cri.go:89] found id: ""
	I0920 19:06:59.090140  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.090152  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:59.090159  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:59.090213  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:59.128028  303486 cri.go:89] found id: ""
	I0920 19:06:59.128057  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.128068  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:59.128080  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:59.128096  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:59.142966  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:59.143012  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:59.227311  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:59.227336  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:59.227357  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:59.308319  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:59.308366  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:59.347299  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:59.347336  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:01.897644  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:01.912876  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:01.912951  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:01.956550  303486 cri.go:89] found id: ""
	I0920 19:07:01.956679  303486 logs.go:276] 0 containers: []
	W0920 19:07:01.956690  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:01.956700  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:01.956765  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:01.995391  303486 cri.go:89] found id: ""
	I0920 19:07:01.995425  303486 logs.go:276] 0 containers: []
	W0920 19:07:01.995433  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:01.995440  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:01.995501  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:02.031149  303486 cri.go:89] found id: ""
	I0920 19:07:02.031181  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.031193  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:02.031202  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:02.031273  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:02.065856  303486 cri.go:89] found id: ""
	I0920 19:07:02.065885  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.065894  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:02.065924  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:02.065981  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:02.101974  303486 cri.go:89] found id: ""
	I0920 19:07:02.102018  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.102032  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:02.102041  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:02.102115  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:02.138108  303486 cri.go:89] found id: ""
	I0920 19:07:02.138142  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.138151  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:02.138156  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:02.138217  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:02.170136  303486 cri.go:89] found id: ""
	I0920 19:07:02.170165  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.170173  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:02.170179  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:02.170244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:02.203944  303486 cri.go:89] found id: ""
	I0920 19:07:02.203969  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.203978  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:02.203991  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:02.204008  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:02.256635  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:02.256679  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:02.270266  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:02.270303  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:02.341145  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:02.341182  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:02.341199  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:02.415133  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:02.415175  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:00.283726  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:02.285304  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:02.462310  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:04.462872  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:03.605300  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:06.105872  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:04.952448  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:04.966632  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:04.966702  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:05.001098  303486 cri.go:89] found id: ""
	I0920 19:07:05.001131  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.001141  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:05.001149  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:05.001217  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:05.038160  303486 cri.go:89] found id: ""
	I0920 19:07:05.038186  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.038196  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:05.038202  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:05.038260  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:05.083301  303486 cri.go:89] found id: ""
	I0920 19:07:05.083346  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.083357  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:05.083365  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:05.083436  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:05.118916  303486 cri.go:89] found id: ""
	I0920 19:07:05.118952  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.118964  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:05.118972  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:05.119065  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:05.157452  303486 cri.go:89] found id: ""
	I0920 19:07:05.157485  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.157496  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:05.157511  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:05.157587  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:05.197100  303486 cri.go:89] found id: ""
	I0920 19:07:05.197133  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.197143  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:05.197152  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:05.197225  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:05.231286  303486 cri.go:89] found id: ""
	I0920 19:07:05.231317  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.231328  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:05.231336  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:05.231409  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:05.269798  303486 cri.go:89] found id: ""
	I0920 19:07:05.269835  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.269847  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:05.269862  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:05.269882  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:05.310029  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:05.310068  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:05.360493  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:05.360537  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:05.373771  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:05.373815  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:05.449860  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:05.449886  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:05.449924  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:08.034520  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:08.049970  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:08.050040  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:08.084683  303486 cri.go:89] found id: ""
	I0920 19:07:08.084714  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.084724  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:08.084731  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:08.084799  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:08.121150  303486 cri.go:89] found id: ""
	I0920 19:07:08.121176  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.121183  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:08.121190  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:08.121244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:08.157830  303486 cri.go:89] found id: ""
	I0920 19:07:08.157865  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.157877  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:08.157891  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:08.157967  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:08.191040  303486 cri.go:89] found id: ""
	I0920 19:07:08.191082  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.191094  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:08.191102  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:08.191169  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:08.230194  303486 cri.go:89] found id: ""
	I0920 19:07:08.230230  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.230239  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:08.230246  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:08.230304  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:08.268526  303486 cri.go:89] found id: ""
	I0920 19:07:08.268558  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.268566  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:08.268573  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:08.268631  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:08.302383  303486 cri.go:89] found id: ""
	I0920 19:07:08.302411  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.302420  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:08.302428  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:08.302492  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:08.336435  303486 cri.go:89] found id: ""
	I0920 19:07:08.336469  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.336479  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:08.336491  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:08.336505  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:08.418086  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:08.418129  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:08.458355  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:08.458391  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:08.507017  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:08.507062  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:08.522701  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:08.522737  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:08.592777  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
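	Each repetition of the block above is one pass of minikube's log collector for process 303486: it probes for every control-plane container with crictl, finds none, then falls back to gathering kubelet, dmesg, CRI-O and container-status output, and the kubectl describe nodes step keeps failing because nothing is serving on localhost:8443. A hand-run sketch of the same probes, taken from the commands quoted in the log (they assume a shell on the node, e.g. via minikube ssh), would be:

	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig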
	I0920 19:07:04.784475  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:07.283612  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:09.286218  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:06.963106  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:09.462861  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:08.108458  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:10.605447  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:12.605992  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:11.093689  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:11.107438  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:11.107503  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:11.139701  303486 cri.go:89] found id: ""
	I0920 19:07:11.139742  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.139755  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:11.139765  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:11.139822  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:11.196143  303486 cri.go:89] found id: ""
	I0920 19:07:11.196182  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.196191  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:11.196197  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:11.196268  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:11.232121  303486 cri.go:89] found id: ""
	I0920 19:07:11.232156  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.232164  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:11.232171  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:11.232238  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:11.267307  303486 cri.go:89] found id: ""
	I0920 19:07:11.267338  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.267349  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:11.267358  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:11.267423  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:11.306583  303486 cri.go:89] found id: ""
	I0920 19:07:11.306614  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.306623  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:11.306631  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:11.306698  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:11.348162  303486 cri.go:89] found id: ""
	I0920 19:07:11.348188  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.348196  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:11.348203  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:11.348257  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:11.383612  303486 cri.go:89] found id: ""
	I0920 19:07:11.383649  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.383660  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:11.383669  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:11.383736  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:11.417538  303486 cri.go:89] found id: ""
	I0920 19:07:11.417575  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.417583  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:11.417593  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:11.417609  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:11.470242  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:11.470282  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:11.485448  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:11.485480  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:11.559466  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:11.559495  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:11.559513  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:11.636080  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:11.636133  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:11.783461  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:13.783785  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:11.462940  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:13.963340  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:14.609611  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:17.105222  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:14.177278  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:14.190413  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:14.190483  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:14.224238  303486 cri.go:89] found id: ""
	I0920 19:07:14.224264  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.224272  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:14.224278  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:14.224330  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:14.265253  303486 cri.go:89] found id: ""
	I0920 19:07:14.265285  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.265297  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:14.265304  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:14.265357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:14.300591  303486 cri.go:89] found id: ""
	I0920 19:07:14.300619  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.300633  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:14.300639  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:14.300695  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:14.335638  303486 cri.go:89] found id: ""
	I0920 19:07:14.335669  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.335677  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:14.335683  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:14.335735  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:14.369291  303486 cri.go:89] found id: ""
	I0920 19:07:14.369328  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.369336  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:14.369344  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:14.369397  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:14.404913  303486 cri.go:89] found id: ""
	I0920 19:07:14.404947  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.404958  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:14.404967  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:14.405034  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:14.438793  303486 cri.go:89] found id: ""
	I0920 19:07:14.438834  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.438845  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:14.438856  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:14.438926  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:14.475268  303486 cri.go:89] found id: ""
	I0920 19:07:14.475297  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.475305  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:14.475321  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:14.475342  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:14.528066  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:14.528126  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:14.542850  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:14.542891  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:14.612772  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:14.612800  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:14.612819  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:14.694528  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:14.694579  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:17.234389  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:17.247479  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:17.247544  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:17.285461  303486 cri.go:89] found id: ""
	I0920 19:07:17.285488  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.285496  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:17.285502  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:17.285553  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:17.320580  303486 cri.go:89] found id: ""
	I0920 19:07:17.320606  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.320614  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:17.320620  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:17.320677  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:17.356405  303486 cri.go:89] found id: ""
	I0920 19:07:17.356440  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.356462  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:17.356471  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:17.356526  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:17.391268  303486 cri.go:89] found id: ""
	I0920 19:07:17.391301  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.391309  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:17.391316  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:17.391381  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:17.429886  303486 cri.go:89] found id: ""
	I0920 19:07:17.429938  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.429950  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:17.429959  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:17.430022  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:17.466059  303486 cri.go:89] found id: ""
	I0920 19:07:17.466093  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.466104  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:17.466111  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:17.466176  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:17.501128  303486 cri.go:89] found id: ""
	I0920 19:07:17.501159  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.501168  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:17.501174  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:17.501247  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:17.536969  303486 cri.go:89] found id: ""
	I0920 19:07:17.536999  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.537007  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:17.537016  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:17.537031  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:17.592071  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:17.592119  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:17.609022  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:17.609057  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:17.696393  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:17.696420  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:17.696434  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:17.778077  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:17.778122  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:15.785002  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:18.284101  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:16.463809  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:18.964348  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:19.604758  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:21.608192  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:20.319211  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:20.332158  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:20.332235  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:20.366195  303486 cri.go:89] found id: ""
	I0920 19:07:20.366230  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.366241  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:20.366250  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:20.366313  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:20.401786  303486 cri.go:89] found id: ""
	I0920 19:07:20.401819  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.401829  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:20.401846  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:20.401943  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:20.433684  303486 cri.go:89] found id: ""
	I0920 19:07:20.433711  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.433719  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:20.433725  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:20.433783  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:20.469495  303486 cri.go:89] found id: ""
	I0920 19:07:20.469524  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.469535  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:20.469543  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:20.469613  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:20.502214  303486 cri.go:89] found id: ""
	I0920 19:07:20.502245  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.502256  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:20.502263  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:20.502329  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:20.535829  303486 cri.go:89] found id: ""
	I0920 19:07:20.535867  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.535879  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:20.535887  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:20.535952  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:20.569605  303486 cri.go:89] found id: ""
	I0920 19:07:20.569635  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.569643  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:20.569654  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:20.569726  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:20.603676  303486 cri.go:89] found id: ""
	I0920 19:07:20.603699  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.603706  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:20.603715  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:20.603726  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:20.656645  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:20.656692  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:20.671077  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:20.671107  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:20.740996  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:20.741028  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:20.741046  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:20.820541  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:20.820592  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:23.362973  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:23.380350  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:23.380432  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:23.423145  303486 cri.go:89] found id: ""
	I0920 19:07:23.423183  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.423193  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:23.423202  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:23.423272  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:23.459019  303486 cri.go:89] found id: ""
	I0920 19:07:23.459057  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.459068  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:23.459077  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:23.459144  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:23.502876  303486 cri.go:89] found id: ""
	I0920 19:07:23.502908  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.502920  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:23.502929  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:23.502994  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:23.538440  303486 cri.go:89] found id: ""
	I0920 19:07:23.538471  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.538481  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:23.538489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:23.538552  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:23.575164  303486 cri.go:89] found id: ""
	I0920 19:07:23.575199  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.575211  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:23.575220  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:23.575296  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:23.610449  303486 cri.go:89] found id: ""
	I0920 19:07:23.610480  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.610489  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:23.610495  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:23.610562  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:23.644164  303486 cri.go:89] found id: ""
	I0920 19:07:23.644195  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.644203  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:23.644209  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:23.644275  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:23.684379  303486 cri.go:89] found id: ""
	I0920 19:07:23.684417  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.684428  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:23.684442  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:23.684459  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:23.762838  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:23.762885  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:23.805616  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:23.805650  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:23.857080  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:23.857122  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:23.870602  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:23.870635  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:23.941187  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
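	The recurring "connection to the server localhost:8443 was refused" error is consistent with the empty crictl results above: no kube-apiserver container exists, so nothing answers on the apiserver port. A quick confirmation from inside the node, shown only as an illustration and not taken from the log, would be to check whether anything is listening on 8443:

	    # Illustrative sketch; run on the node itself.
	    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
	    curl -sk https://localhost:8443/healthz || true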
	I0920 19:07:20.284264  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:22.284388  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:24.285108  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:21.462493  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:23.467933  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:25.963071  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:24.106087  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:26.605442  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:26.441571  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:26.455091  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:26.455185  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:26.489658  303486 cri.go:89] found id: ""
	I0920 19:07:26.489696  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.489707  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:26.489716  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:26.489773  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:26.528829  303486 cri.go:89] found id: ""
	I0920 19:07:26.528865  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.528878  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:26.528886  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:26.528966  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:26.568402  303486 cri.go:89] found id: ""
	I0920 19:07:26.568429  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.568443  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:26.568450  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:26.568503  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:26.606654  303486 cri.go:89] found id: ""
	I0920 19:07:26.606683  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.606693  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:26.606701  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:26.606764  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:26.640825  303486 cri.go:89] found id: ""
	I0920 19:07:26.640856  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.640864  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:26.640871  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:26.640934  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:26.677023  303486 cri.go:89] found id: ""
	I0920 19:07:26.677054  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.677062  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:26.677068  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:26.677123  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:26.712921  303486 cri.go:89] found id: ""
	I0920 19:07:26.712956  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.712964  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:26.712971  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:26.713031  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:26.747750  303486 cri.go:89] found id: ""
	I0920 19:07:26.747778  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.747786  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:26.747796  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:26.747810  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:26.799240  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:26.799283  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:26.813197  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:26.813233  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:26.882751  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:26.882780  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:26.882799  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:26.965108  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:26.965146  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:26.784306  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:29.283573  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:28.461526  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:30.462242  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:28.606602  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:31.106657  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:29.503960  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:29.516601  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:29.516669  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:29.555581  303486 cri.go:89] found id: ""
	I0920 19:07:29.555622  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.555632  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:29.555640  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:29.555711  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:29.593858  303486 cri.go:89] found id: ""
	I0920 19:07:29.593885  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.593923  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:29.593937  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:29.593990  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:29.629507  303486 cri.go:89] found id: ""
	I0920 19:07:29.629538  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.629548  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:29.629557  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:29.629616  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:29.662880  303486 cri.go:89] found id: ""
	I0920 19:07:29.662913  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.662921  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:29.662928  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:29.662976  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:29.695422  303486 cri.go:89] found id: ""
	I0920 19:07:29.695448  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.695458  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:29.695466  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:29.695531  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:29.730641  303486 cri.go:89] found id: ""
	I0920 19:07:29.730673  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.730685  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:29.730693  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:29.730756  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:29.764186  303486 cri.go:89] found id: ""
	I0920 19:07:29.764220  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.764229  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:29.764238  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:29.764302  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:29.804146  303486 cri.go:89] found id: ""
	I0920 19:07:29.804174  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.804182  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:29.804191  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:29.804204  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:29.885573  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:29.885633  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:29.924619  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:29.924667  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:29.978187  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:29.978230  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:29.992161  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:29.992190  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:30.069767  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:32.570197  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:32.583160  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:32.583244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:32.620842  303486 cri.go:89] found id: ""
	I0920 19:07:32.620870  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.620881  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:32.620899  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:32.620958  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:32.657169  303486 cri.go:89] found id: ""
	I0920 19:07:32.657205  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.657216  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:32.657225  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:32.657292  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:32.694773  303486 cri.go:89] found id: ""
	I0920 19:07:32.694802  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.694809  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:32.694815  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:32.694882  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:32.733318  303486 cri.go:89] found id: ""
	I0920 19:07:32.733350  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.733360  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:32.733370  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:32.733436  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:32.766019  303486 cri.go:89] found id: ""
	I0920 19:07:32.766052  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.766062  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:32.766070  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:32.766138  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:32.801412  303486 cri.go:89] found id: ""
	I0920 19:07:32.801443  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.801454  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:32.801463  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:32.801533  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:32.833743  303486 cri.go:89] found id: ""
	I0920 19:07:32.833771  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.833779  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:32.833787  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:32.833847  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:32.866775  303486 cri.go:89] found id: ""
	I0920 19:07:32.866803  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.866811  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:32.866821  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:32.866839  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:32.919257  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:32.919310  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:32.933554  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:32.933602  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:33.002657  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:33.002702  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:33.002721  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:33.081271  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:33.081316  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:31.284488  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:33.782998  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:32.462645  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:34.963285  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:33.609072  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:36.107460  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:35.627131  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:35.640958  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:35.641032  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:35.675943  303486 cri.go:89] found id: ""
	I0920 19:07:35.675976  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.675984  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:35.675991  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:35.676044  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:35.710075  303486 cri.go:89] found id: ""
	I0920 19:07:35.710104  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.710116  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:35.710124  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:35.710194  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:35.747890  303486 cri.go:89] found id: ""
	I0920 19:07:35.747920  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.747931  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:35.747939  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:35.748004  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:35.786197  303486 cri.go:89] found id: ""
	I0920 19:07:35.786231  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.786242  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:35.786252  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:35.786314  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:35.819109  303486 cri.go:89] found id: ""
	I0920 19:07:35.819146  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.819158  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:35.819168  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:35.819244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:35.853244  303486 cri.go:89] found id: ""
	I0920 19:07:35.853282  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.853292  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:35.853301  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:35.853378  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:35.886864  303486 cri.go:89] found id: ""
	I0920 19:07:35.886897  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.886908  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:35.886917  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:35.886986  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:35.920872  303486 cri.go:89] found id: ""
	I0920 19:07:35.920906  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.920917  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:35.920939  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:35.920957  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:35.998741  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:35.998794  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:36.040681  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:36.040720  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:36.095848  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:36.095909  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:36.110903  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:36.110939  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:36.186658  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
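	Every "describe nodes" attempt fails identically because nothing is serving on the apiserver port that the kubeconfig points at (localhost:8443). A quick manual way to confirm that directly on the node is sketched below; these two probe commands are an assumption and do not appear in the log itself:

	sudo ss -lntp | grep 8443 || echo "nothing listening on :8443"
	curl -ks https://localhost:8443/healthz || echo "apiserver not reachable on localhost:8443"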
	I0920 19:07:38.687762  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:38.701640  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:38.701708  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:38.734908  303486 cri.go:89] found id: ""
	I0920 19:07:38.734946  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.734956  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:38.734966  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:38.735031  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:38.768062  303486 cri.go:89] found id: ""
	I0920 19:07:38.768100  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.768112  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:38.768120  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:38.768188  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:38.800881  303486 cri.go:89] found id: ""
	I0920 19:07:38.800915  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.800927  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:38.800936  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:38.801004  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:38.835119  303486 cri.go:89] found id: ""
	I0920 19:07:38.835148  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.835156  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:38.835164  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:38.835223  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:38.872677  303486 cri.go:89] found id: ""
	I0920 19:07:38.872712  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.872723  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:38.872733  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:38.872807  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:38.913921  303486 cri.go:89] found id: ""
	I0920 19:07:38.913955  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.913965  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:38.913972  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:38.914029  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:35.783443  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.284549  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:36.963668  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.963893  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.608347  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:41.106313  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.951849  303486 cri.go:89] found id: ""
	I0920 19:07:38.951882  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.951893  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:38.951902  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:38.951972  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:38.988117  303486 cri.go:89] found id: ""
	I0920 19:07:38.988149  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.988161  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:38.988177  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:38.988191  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:39.028804  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:39.028843  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:39.083374  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:39.083427  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:39.097434  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:39.097463  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:39.172185  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:39.172213  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:39.172226  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:41.756648  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:41.772358  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:41.772432  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:41.809067  303486 cri.go:89] found id: ""
	I0920 19:07:41.809109  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.809123  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:41.809132  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:41.809191  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:41.853413  303486 cri.go:89] found id: ""
	I0920 19:07:41.853445  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.853457  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:41.853465  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:41.853524  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:41.891536  303486 cri.go:89] found id: ""
	I0920 19:07:41.891569  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.891580  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:41.891588  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:41.891668  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:41.931046  303486 cri.go:89] found id: ""
	I0920 19:07:41.931085  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.931093  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:41.931099  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:41.931155  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:41.968120  303486 cri.go:89] found id: ""
	I0920 19:07:41.968152  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.968164  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:41.968172  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:41.968240  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:42.002478  303486 cri.go:89] found id: ""
	I0920 19:07:42.002512  303486 logs.go:276] 0 containers: []
	W0920 19:07:42.002523  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:42.002532  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:42.002599  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:42.038031  303486 cri.go:89] found id: ""
	I0920 19:07:42.038067  303486 logs.go:276] 0 containers: []
	W0920 19:07:42.038080  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:42.038087  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:42.038150  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:42.072124  303486 cri.go:89] found id: ""
	I0920 19:07:42.072155  303486 logs.go:276] 0 containers: []
	W0920 19:07:42.072166  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:42.072178  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:42.072195  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:42.128217  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:42.128259  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:42.142291  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:42.142322  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:42.215278  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:42.215305  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:42.215324  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:42.293431  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:42.293476  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:40.784191  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:43.283580  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:41.463429  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:43.963059  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:43.608790  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:46.105338  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
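	The interleaved pod_ready lines (processes 302869, 302538 and 303063) come from the parallel StartStop profiles, each polling its metrics-server pod and repeatedly observing Ready=False. A one-off equivalent of that poll with kubectl is sketched below; the profile context variable and the k8s-app=metrics-server label selector are assumptions, not taken from this log:

	kubectl --context "$PROFILE" -n kube-system get pods -l k8s-app=metrics-server \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'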
	I0920 19:07:44.836094  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:44.850327  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:44.850397  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:44.884595  303486 cri.go:89] found id: ""
	I0920 19:07:44.884624  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.884632  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:44.884639  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:44.884711  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:44.917727  303486 cri.go:89] found id: ""
	I0920 19:07:44.917754  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.917763  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:44.917769  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:44.917837  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:44.955821  303486 cri.go:89] found id: ""
	I0920 19:07:44.955860  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.955871  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:44.955879  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:44.955937  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:44.994543  303486 cri.go:89] found id: ""
	I0920 19:07:44.994579  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.994590  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:44.994598  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:44.994651  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:45.031839  303486 cri.go:89] found id: ""
	I0920 19:07:45.031877  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.031888  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:45.031896  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:45.031962  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:45.070554  303486 cri.go:89] found id: ""
	I0920 19:07:45.070588  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.070601  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:45.070609  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:45.070678  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:45.108727  303486 cri.go:89] found id: ""
	I0920 19:07:45.108760  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.108771  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:45.108779  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:45.108855  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:45.144045  303486 cri.go:89] found id: ""
	I0920 19:07:45.144075  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.144083  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:45.144094  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:45.144108  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:45.185800  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:45.185834  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:45.238364  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:45.238410  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:45.252111  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:45.252145  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:45.329009  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:45.329036  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:45.329051  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:47.912910  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:47.926378  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:47.926458  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:47.961067  303486 cri.go:89] found id: ""
	I0920 19:07:47.961094  303486 logs.go:276] 0 containers: []
	W0920 19:07:47.961103  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:47.961111  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:47.961172  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:48.006680  303486 cri.go:89] found id: ""
	I0920 19:07:48.006717  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.006729  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:48.006738  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:48.006805  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:48.042230  303486 cri.go:89] found id: ""
	I0920 19:07:48.042261  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.042272  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:48.042281  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:48.042349  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:48.080779  303486 cri.go:89] found id: ""
	I0920 19:07:48.080836  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.080850  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:48.080860  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:48.080931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:48.119439  303486 cri.go:89] found id: ""
	I0920 19:07:48.119469  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.119477  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:48.119483  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:48.119536  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:48.156219  303486 cri.go:89] found id: ""
	I0920 19:07:48.156258  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.156269  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:48.156279  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:48.156354  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:48.192112  303486 cri.go:89] found id: ""
	I0920 19:07:48.192151  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.192162  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:48.192170  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:48.192240  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:48.228916  303486 cri.go:89] found id: ""
	I0920 19:07:48.228958  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.228968  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:48.228981  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:48.229003  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:48.284073  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:48.284115  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:48.297677  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:48.297713  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:48.374834  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:48.374860  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:48.374876  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:48.455468  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:48.455512  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:45.284055  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:47.783744  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:46.461832  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:48.462980  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:50.463485  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:48.605035  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:51.105952  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:50.998354  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:51.012827  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:51.012904  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:51.046701  303486 cri.go:89] found id: ""
	I0920 19:07:51.046739  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.046750  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:51.046758  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:51.046827  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:51.083829  303486 cri.go:89] found id: ""
	I0920 19:07:51.083867  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.083878  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:51.083891  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:51.083965  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:51.124126  303486 cri.go:89] found id: ""
	I0920 19:07:51.124170  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.124180  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:51.124187  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:51.124254  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:51.159141  303486 cri.go:89] found id: ""
	I0920 19:07:51.159175  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.159184  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:51.159190  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:51.159253  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:51.192793  303486 cri.go:89] found id: ""
	I0920 19:07:51.192829  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.192840  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:51.192863  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:51.192938  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:51.225489  303486 cri.go:89] found id: ""
	I0920 19:07:51.225515  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.225524  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:51.225530  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:51.225582  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:51.258256  303486 cri.go:89] found id: ""
	I0920 19:07:51.258283  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.258294  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:51.258301  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:51.258363  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:51.292474  303486 cri.go:89] found id: ""
	I0920 19:07:51.292504  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.292512  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:51.292522  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:51.292537  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:51.331386  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:51.331422  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:51.385136  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:51.385182  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:51.400792  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:51.400828  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:51.492771  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:51.492795  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:51.492810  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:49.784132  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:52.284075  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:54.284870  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:52.963813  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:55.464095  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:53.607259  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:56.106592  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:54.074889  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:54.088453  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:54.088534  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:54.125096  303486 cri.go:89] found id: ""
	I0920 19:07:54.125138  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.125159  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:54.125166  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:54.125231  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:54.159630  303486 cri.go:89] found id: ""
	I0920 19:07:54.159665  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.159676  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:54.159685  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:54.159759  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:54.195919  303486 cri.go:89] found id: ""
	I0920 19:07:54.195951  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.195965  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:54.195972  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:54.196042  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:54.230294  303486 cri.go:89] found id: ""
	I0920 19:07:54.230323  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.230332  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:54.230339  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:54.230396  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:54.266764  303486 cri.go:89] found id: ""
	I0920 19:07:54.266793  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.266800  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:54.266807  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:54.266865  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:54.300704  303486 cri.go:89] found id: ""
	I0920 19:07:54.300731  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.300741  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:54.300750  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:54.300817  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:54.334447  303486 cri.go:89] found id: ""
	I0920 19:07:54.334473  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.334480  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:54.334487  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:54.334546  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:54.369814  303486 cri.go:89] found id: ""
	I0920 19:07:54.369858  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.369866  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:54.369878  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:54.369890  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:54.423088  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:54.423135  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:54.436770  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:54.436801  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:54.510731  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:54.510757  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:54.510773  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:54.593041  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:54.593091  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:57.134030  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:57.147605  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:57.147674  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:57.202662  303486 cri.go:89] found id: ""
	I0920 19:07:57.202690  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.202699  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:57.202705  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:57.202757  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:57.236448  303486 cri.go:89] found id: ""
	I0920 19:07:57.236476  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.236484  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:57.236493  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:57.236558  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:57.269450  303486 cri.go:89] found id: ""
	I0920 19:07:57.269478  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.269485  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:57.269491  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:57.269544  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:57.305749  303486 cri.go:89] found id: ""
	I0920 19:07:57.305784  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.305795  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:57.305806  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:57.305877  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:57.339802  303486 cri.go:89] found id: ""
	I0920 19:07:57.339844  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.339857  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:57.339866  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:57.339942  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:57.371929  303486 cri.go:89] found id: ""
	I0920 19:07:57.371962  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.371971  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:57.371980  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:57.372051  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:57.405749  303486 cri.go:89] found id: ""
	I0920 19:07:57.405789  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.405802  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:57.405812  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:57.405888  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:57.439259  303486 cri.go:89] found id: ""
	I0920 19:07:57.439291  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.439300  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:57.439310  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:57.439323  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:57.491405  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:57.491450  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:57.505992  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:57.506027  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:57.580598  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:57.580623  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:57.580638  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:57.659475  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:57.659513  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:56.783867  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:58.783944  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:57.465789  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:59.963589  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:58.606492  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:01.105967  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:00.201478  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:00.217162  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:00.217228  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:00.252219  303486 cri.go:89] found id: ""
	I0920 19:08:00.252247  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.252256  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:00.252263  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:00.252334  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:00.287244  303486 cri.go:89] found id: ""
	I0920 19:08:00.287283  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.287295  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:00.287302  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:00.287367  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:00.325785  303486 cri.go:89] found id: ""
	I0920 19:08:00.325818  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.325829  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:00.325839  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:00.325931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:00.359718  303486 cri.go:89] found id: ""
	I0920 19:08:00.359747  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.359757  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:00.359766  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:00.359847  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:00.399105  303486 cri.go:89] found id: ""
	I0920 19:08:00.399147  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.399156  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:00.399163  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:00.399227  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:00.433647  303486 cri.go:89] found id: ""
	I0920 19:08:00.433675  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.433683  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:00.433692  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:00.433756  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:00.467771  303486 cri.go:89] found id: ""
	I0920 19:08:00.467820  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.467832  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:00.467841  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:00.467911  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:00.511320  303486 cri.go:89] found id: ""
	I0920 19:08:00.511363  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.511376  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:00.511392  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:00.511414  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:00.594669  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:00.594703  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:00.594723  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:00.672747  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:00.672800  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:00.710001  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:00.710049  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:00.760333  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:00.760378  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:03.274393  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:03.289260  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:03.289352  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:03.327884  303486 cri.go:89] found id: ""
	I0920 19:08:03.327919  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.327932  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:03.327942  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:03.328015  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:03.367259  303486 cri.go:89] found id: ""
	I0920 19:08:03.367289  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.367297  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:03.367303  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:03.367361  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:03.405843  303486 cri.go:89] found id: ""
	I0920 19:08:03.405899  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.405932  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:03.405942  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:03.406056  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:03.441026  303486 cri.go:89] found id: ""
	I0920 19:08:03.441058  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.441069  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:03.441078  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:03.441147  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:03.477213  303486 cri.go:89] found id: ""
	I0920 19:08:03.477249  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.477261  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:03.477327  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:03.477415  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:03.515843  303486 cri.go:89] found id: ""
	I0920 19:08:03.515880  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.515888  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:03.515895  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:03.515945  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:03.566972  303486 cri.go:89] found id: ""
	I0920 19:08:03.567009  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.567020  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:03.567028  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:03.567097  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:03.616957  303486 cri.go:89] found id: ""
	I0920 19:08:03.617000  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.617013  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:03.617029  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:03.617048  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:03.683140  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:03.683192  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:03.697225  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:03.697267  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:03.770430  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:03.770455  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:03.770478  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:03.848796  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:03.848836  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:01.284245  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:03.284437  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:01.964058  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:04.462786  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:03.607506  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:06.106008  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:06.387706  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:06.401600  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:06.401669  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:06.437854  303486 cri.go:89] found id: ""
	I0920 19:08:06.437890  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.437917  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:06.437926  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:06.437993  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:06.472617  303486 cri.go:89] found id: ""
	I0920 19:08:06.472647  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.472655  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:06.472662  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:06.472718  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:06.510083  303486 cri.go:89] found id: ""
	I0920 19:08:06.510118  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.510131  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:06.510140  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:06.510212  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:06.546388  303486 cri.go:89] found id: ""
	I0920 19:08:06.546418  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.546427  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:06.546434  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:06.546485  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:06.584043  303486 cri.go:89] found id: ""
	I0920 19:08:06.584084  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.584096  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:06.584106  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:06.584182  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:06.622118  303486 cri.go:89] found id: ""
	I0920 19:08:06.622147  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.622155  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:06.622161  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:06.622217  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:06.655513  303486 cri.go:89] found id: ""
	I0920 19:08:06.655552  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.655585  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:06.655593  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:06.655657  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:06.690286  303486 cri.go:89] found id: ""
	I0920 19:08:06.690324  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.690336  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:06.690350  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:06.690368  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:06.729229  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:06.729259  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:06.780368  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:06.780411  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:06.794746  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:06.794782  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:06.866918  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:06.866944  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:06.866967  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:05.784123  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:08.284383  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:06.462855  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:08.466867  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:10.963736  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:08.106490  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:10.606291  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:09.451583  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:09.465111  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:09.465178  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:09.497679  303486 cri.go:89] found id: ""
	I0920 19:08:09.497713  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.497725  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:09.497733  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:09.497797  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:09.535297  303486 cri.go:89] found id: ""
	I0920 19:08:09.535334  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.535345  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:09.535353  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:09.535427  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:09.572449  303486 cri.go:89] found id: ""
	I0920 19:08:09.572482  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.572491  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:09.572498  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:09.572608  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:09.612672  303486 cri.go:89] found id: ""
	I0920 19:08:09.612697  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.612705  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:09.612711  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:09.612797  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:09.654366  303486 cri.go:89] found id: ""
	I0920 19:08:09.654399  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.654408  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:09.654415  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:09.654470  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:09.694825  303486 cri.go:89] found id: ""
	I0920 19:08:09.694858  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.694870  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:09.694878  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:09.694942  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:09.731618  303486 cri.go:89] found id: ""
	I0920 19:08:09.731682  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.731693  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:09.731702  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:09.731775  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:09.766717  303486 cri.go:89] found id: ""
	I0920 19:08:09.766755  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.766765  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:09.766779  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:09.766794  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:09.823505  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:09.823549  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:09.837622  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:09.837658  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:09.919105  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:09.919139  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:09.919156  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:10.000899  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:10.000943  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:12.542974  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:12.557265  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:12.557335  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:12.594099  303486 cri.go:89] found id: ""
	I0920 19:08:12.594126  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.594134  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:12.594140  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:12.594199  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:12.627271  303486 cri.go:89] found id: ""
	I0920 19:08:12.627301  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.627308  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:12.627314  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:12.627366  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:12.661225  303486 cri.go:89] found id: ""
	I0920 19:08:12.661256  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.661265  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:12.661272  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:12.661332  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:12.701381  303486 cri.go:89] found id: ""
	I0920 19:08:12.701424  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.701437  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:12.701447  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:12.701524  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:12.739189  303486 cri.go:89] found id: ""
	I0920 19:08:12.739227  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.739235  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:12.739246  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:12.739299  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:12.780931  303486 cri.go:89] found id: ""
	I0920 19:08:12.780958  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.781055  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:12.781068  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:12.781124  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:12.818097  303486 cri.go:89] found id: ""
	I0920 19:08:12.818137  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.818150  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:12.818161  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:12.818294  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:12.852925  303486 cri.go:89] found id: ""
	I0920 19:08:12.852957  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.852965  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:12.852975  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:12.852990  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:12.924746  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:12.924774  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:12.924791  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:13.005668  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:13.005718  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:13.044327  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:13.044359  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:13.094788  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:13.094833  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:10.284510  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:12.783546  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:12.964694  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:15.463615  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:13.105052  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:15.604922  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:15.611965  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:15.625857  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:15.625960  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:15.662138  303486 cri.go:89] found id: ""
	I0920 19:08:15.662169  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.662177  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:15.662184  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:15.662261  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:15.696000  303486 cri.go:89] found id: ""
	I0920 19:08:15.696067  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.696100  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:15.696115  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:15.696234  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:15.735594  303486 cri.go:89] found id: ""
	I0920 19:08:15.735625  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.735633  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:15.735640  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:15.735699  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:15.774666  303486 cri.go:89] found id: ""
	I0920 19:08:15.774693  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.774703  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:15.774712  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:15.774777  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:15.810754  303486 cri.go:89] found id: ""
	I0920 19:08:15.810799  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.810811  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:15.810820  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:15.810884  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:15.846709  303486 cri.go:89] found id: ""
	I0920 19:08:15.846739  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.846748  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:15.846757  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:15.846819  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:15.880798  303486 cri.go:89] found id: ""
	I0920 19:08:15.880825  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.880833  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:15.880839  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:15.880895  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:15.915119  303486 cri.go:89] found id: ""
	I0920 19:08:15.915150  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.915159  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:15.915170  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:15.915186  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:15.966048  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:15.966087  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:15.979287  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:15.979322  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:16.052129  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:16.052163  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:16.052180  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:16.137743  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:16.137788  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:18.678389  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:18.693073  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:18.693152  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:18.734909  303486 cri.go:89] found id: ""
	I0920 19:08:18.734943  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.734954  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:18.734962  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:18.735028  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:18.773472  303486 cri.go:89] found id: ""
	I0920 19:08:18.773506  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.773517  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:18.773525  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:18.773620  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:18.812184  303486 cri.go:89] found id: ""
	I0920 19:08:18.812218  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.812228  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:18.812236  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:18.812305  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:18.846569  303486 cri.go:89] found id: ""
	I0920 19:08:18.846608  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.846619  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:18.846627  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:18.846700  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:18.881794  303486 cri.go:89] found id: ""
	I0920 19:08:18.881836  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.881862  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:18.881870  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:18.881943  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:18.919657  303486 cri.go:89] found id: ""
	I0920 19:08:18.919688  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.919698  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:18.919708  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:18.919774  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:14.784734  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:17.283590  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:19.284056  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:17.962913  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:20.462190  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:18.105736  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:20.106314  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:22.605231  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
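	The pod_ready lines interleaved here come from parallel StartStop tests polling the Ready condition of their metrics-server pods. A hedged example of inspecting the same condition directly (the pod name is copied from the log; the jsonpath form is standard kubectl usage, not a command the harness itself runs):
	  kubectl --namespace kube-system get pod metrics-server-6867b74b74-2tnqc \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'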
	I0920 19:08:18.955117  303486 cri.go:89] found id: ""
	I0920 19:08:18.955146  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.955157  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:18.955166  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:18.955243  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:18.992389  303486 cri.go:89] found id: ""
	I0920 19:08:18.992422  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.992430  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:18.992444  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:18.992460  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:19.070374  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:19.070417  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:19.110793  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:19.110825  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:19.163783  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:19.163830  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:19.177348  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:19.177387  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:19.249469  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
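	Each describe-nodes attempt fails the same way because no kube-apiserver container is running (CRI-O reports none above), so nothing is listening on localhost:8443 and the connection is refused. A quick manual probe of that symptom, assuming curl is available on the node (not a harness command):
	  # expect "connection refused" while no kube-apiserver container is running
	  curl -k https://localhost:8443/healthz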
	I0920 19:08:21.749644  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:21.764920  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:21.765006  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:21.803443  303486 cri.go:89] found id: ""
	I0920 19:08:21.803473  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.803481  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:21.803489  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:21.803545  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:21.844552  303486 cri.go:89] found id: ""
	I0920 19:08:21.844582  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.844593  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:21.844601  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:21.844672  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:21.878979  303486 cri.go:89] found id: ""
	I0920 19:08:21.879007  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.879017  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:21.879029  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:21.879099  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:21.915745  303486 cri.go:89] found id: ""
	I0920 19:08:21.915773  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.915783  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:21.915794  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:21.915865  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:21.948999  303486 cri.go:89] found id: ""
	I0920 19:08:21.949031  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.949043  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:21.949052  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:21.949118  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:21.984238  303486 cri.go:89] found id: ""
	I0920 19:08:21.984269  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.984277  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:21.984284  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:21.984357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:22.018581  303486 cri.go:89] found id: ""
	I0920 19:08:22.018610  303486 logs.go:276] 0 containers: []
	W0920 19:08:22.018620  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:22.018628  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:22.018694  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:22.051868  303486 cri.go:89] found id: ""
	I0920 19:08:22.051903  303486 logs.go:276] 0 containers: []
	W0920 19:08:22.051913  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:22.051925  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:22.051942  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:22.106711  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:22.106756  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:22.120910  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:22.120940  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:22.196564  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:22.196591  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:22.196608  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:22.275235  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:22.275288  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:21.785129  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:24.284359  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:22.463122  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:24.962694  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:25.105050  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:27.105237  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:24.821956  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:24.836846  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:24.836918  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:24.878371  303486 cri.go:89] found id: ""
	I0920 19:08:24.878398  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.878406  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:24.878413  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:24.878464  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:24.911450  303486 cri.go:89] found id: ""
	I0920 19:08:24.911480  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.911489  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:24.911497  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:24.911590  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:24.949248  303486 cri.go:89] found id: ""
	I0920 19:08:24.949281  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.949289  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:24.949298  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:24.949352  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:24.987899  303486 cri.go:89] found id: ""
	I0920 19:08:24.987932  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.987939  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:24.987948  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:24.988011  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:25.020589  303486 cri.go:89] found id: ""
	I0920 19:08:25.020627  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.020638  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:25.020646  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:25.020701  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:25.060223  303486 cri.go:89] found id: ""
	I0920 19:08:25.060250  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.060258  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:25.060266  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:25.060331  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:25.099111  303486 cri.go:89] found id: ""
	I0920 19:08:25.099141  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.099151  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:25.099160  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:25.099242  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:25.136055  303486 cri.go:89] found id: ""
	I0920 19:08:25.136089  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.136098  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:25.136118  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:25.136135  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:25.187619  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:25.187658  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:25.200983  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:25.201016  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:25.270746  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:25.270778  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:25.270795  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:25.350009  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:25.350050  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:27.889864  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:27.903156  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:27.903231  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:27.935087  303486 cri.go:89] found id: ""
	I0920 19:08:27.935118  303486 logs.go:276] 0 containers: []
	W0920 19:08:27.935128  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:27.935138  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:27.935199  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:27.970451  303486 cri.go:89] found id: ""
	I0920 19:08:27.970479  303486 logs.go:276] 0 containers: []
	W0920 19:08:27.970487  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:27.970494  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:27.970545  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:28.004931  303486 cri.go:89] found id: ""
	I0920 19:08:28.004980  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.004992  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:28.005002  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:28.005068  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:28.039438  303486 cri.go:89] found id: ""
	I0920 19:08:28.039470  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.039478  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:28.039485  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:28.039535  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:28.076023  303486 cri.go:89] found id: ""
	I0920 19:08:28.076050  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.076058  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:28.076064  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:28.076131  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:28.114726  303486 cri.go:89] found id: ""
	I0920 19:08:28.114761  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.114772  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:28.114781  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:28.114846  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:28.150790  303486 cri.go:89] found id: ""
	I0920 19:08:28.150822  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.150832  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:28.150841  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:28.150908  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:28.186576  303486 cri.go:89] found id: ""
	I0920 19:08:28.186606  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.186614  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:28.186626  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:28.186648  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:28.240939  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:28.240984  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:28.255267  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:28.255304  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:28.327773  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:28.327797  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:28.327809  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:28.418011  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:28.418055  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:26.785099  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:29.284297  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:26.962825  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:28.963261  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:30.963575  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:29.605453  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:32.104848  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:30.962398  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:30.975385  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:30.975471  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:31.009898  303486 cri.go:89] found id: ""
	I0920 19:08:31.009952  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.009964  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:31.009973  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:31.010044  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:31.043639  303486 cri.go:89] found id: ""
	I0920 19:08:31.043670  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.043679  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:31.043689  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:31.043758  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:31.077709  303486 cri.go:89] found id: ""
	I0920 19:08:31.077745  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.077753  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:31.077759  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:31.077818  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:31.111117  303486 cri.go:89] found id: ""
	I0920 19:08:31.111150  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.111160  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:31.111168  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:31.111234  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:31.143888  303486 cri.go:89] found id: ""
	I0920 19:08:31.143921  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.143933  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:31.143942  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:31.144014  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:31.176694  303486 cri.go:89] found id: ""
	I0920 19:08:31.176729  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.176742  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:31.176751  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:31.176815  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:31.213794  303486 cri.go:89] found id: ""
	I0920 19:08:31.213832  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.213844  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:31.213854  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:31.213946  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:31.250160  303486 cri.go:89] found id: ""
	I0920 19:08:31.250219  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.250230  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:31.250244  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:31.250261  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:31.263748  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:31.263784  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:31.337719  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:31.337749  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:31.337762  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:31.420398  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:31.420446  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:31.459992  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:31.460030  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:31.284818  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:33.783288  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:33.462900  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:35.463122  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:34.105758  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:36.604917  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:34.014229  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:34.028129  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:34.028194  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:34.060793  303486 cri.go:89] found id: ""
	I0920 19:08:34.060832  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.060850  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:34.060859  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:34.060919  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:34.094440  303486 cri.go:89] found id: ""
	I0920 19:08:34.094467  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.094475  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:34.094481  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:34.094544  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:34.128824  303486 cri.go:89] found id: ""
	I0920 19:08:34.128861  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.128872  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:34.128881  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:34.128948  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:34.160861  303486 cri.go:89] found id: ""
	I0920 19:08:34.160894  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.160903  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:34.160911  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:34.160967  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:34.196897  303486 cri.go:89] found id: ""
	I0920 19:08:34.196933  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.196952  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:34.196958  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:34.197020  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:34.229083  303486 cri.go:89] found id: ""
	I0920 19:08:34.229115  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.229125  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:34.229134  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:34.229205  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:34.261877  303486 cri.go:89] found id: ""
	I0920 19:08:34.261922  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.261933  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:34.261941  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:34.262008  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:34.296145  303486 cri.go:89] found id: ""
	I0920 19:08:34.296177  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.296189  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:34.296199  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:34.296214  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:34.361598  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:34.361624  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:34.361641  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:34.441067  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:34.441110  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:34.483333  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:34.483362  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:34.538345  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:34.538388  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:37.053155  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:37.067157  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:37.067230  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:37.101432  303486 cri.go:89] found id: ""
	I0920 19:08:37.101466  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.101476  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:37.101485  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:37.101550  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:37.134375  303486 cri.go:89] found id: ""
	I0920 19:08:37.134408  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.134416  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:37.134423  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:37.134487  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:37.167049  303486 cri.go:89] found id: ""
	I0920 19:08:37.167087  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.167099  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:37.167107  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:37.167175  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:37.209358  303486 cri.go:89] found id: ""
	I0920 19:08:37.209387  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.209397  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:37.209405  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:37.209470  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:37.243227  303486 cri.go:89] found id: ""
	I0920 19:08:37.243261  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.243272  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:37.243281  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:37.243332  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:37.276546  303486 cri.go:89] found id: ""
	I0920 19:08:37.276596  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.276607  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:37.276626  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:37.276688  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:37.311233  303486 cri.go:89] found id: ""
	I0920 19:08:37.311268  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.311279  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:37.311287  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:37.311352  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:37.349970  303486 cri.go:89] found id: ""
	I0920 19:08:37.350003  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.350013  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:37.350025  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:37.350041  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:37.399405  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:37.399445  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:37.423764  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:37.423800  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:37.498797  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:37.498826  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:37.498841  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:37.575521  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:37.575566  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:35.783897  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:37.784496  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:37.463224  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:39.463445  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:38.605444  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:40.606712  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:40.118650  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:40.131967  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:40.132051  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:40.165313  303486 cri.go:89] found id: ""
	I0920 19:08:40.165349  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.165358  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:40.165366  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:40.165439  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:40.197194  303486 cri.go:89] found id: ""
	I0920 19:08:40.197223  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.197232  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:40.197238  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:40.197289  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:40.236769  303486 cri.go:89] found id: ""
	I0920 19:08:40.236800  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.236810  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:40.236819  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:40.236888  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:40.271960  303486 cri.go:89] found id: ""
	I0920 19:08:40.271984  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.271992  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:40.271998  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:40.272049  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:40.307874  303486 cri.go:89] found id: ""
	I0920 19:08:40.307909  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.307917  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:40.307923  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:40.307982  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:40.342128  303486 cri.go:89] found id: ""
	I0920 19:08:40.342160  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.342168  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:40.342175  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:40.342233  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:40.381493  303486 cri.go:89] found id: ""
	I0920 19:08:40.381529  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.381542  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:40.381551  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:40.381617  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:40.415164  303486 cri.go:89] found id: ""
	I0920 19:08:40.415199  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.415211  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:40.415222  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:40.415238  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:40.488306  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:40.488330  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:40.488350  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:40.567193  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:40.567235  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:40.607256  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:40.607287  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:40.659504  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:40.659542  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:43.174043  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:43.188690  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:43.188790  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:43.227223  303486 cri.go:89] found id: ""
	I0920 19:08:43.227251  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.227259  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:43.227267  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:43.227356  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:43.260099  303486 cri.go:89] found id: ""
	I0920 19:08:43.260128  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.260137  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:43.260143  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:43.260195  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:43.297846  303486 cri.go:89] found id: ""
	I0920 19:08:43.297875  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.297886  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:43.297894  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:43.297980  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:43.334026  303486 cri.go:89] found id: ""
	I0920 19:08:43.334061  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.334070  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:43.334078  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:43.334147  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:43.367765  303486 cri.go:89] found id: ""
	I0920 19:08:43.367795  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.367806  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:43.367814  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:43.367890  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:43.402722  303486 cri.go:89] found id: ""
	I0920 19:08:43.402766  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.402778  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:43.402787  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:43.402852  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:43.439643  303486 cri.go:89] found id: ""
	I0920 19:08:43.439674  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.439682  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:43.439690  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:43.439742  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:43.475931  303486 cri.go:89] found id: ""
	I0920 19:08:43.475965  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.475976  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:43.475991  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:43.476006  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:43.545694  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:43.545725  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:43.545739  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:43.627493  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:43.627549  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:43.667758  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:43.667794  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:43.721803  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:43.721851  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:40.285524  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:42.784336  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:41.962300  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:43.963712  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:45.963766  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:43.105271  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:45.105737  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:47.604667  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:46.237499  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:46.250854  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:46.250925  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:46.288918  303486 cri.go:89] found id: ""
	I0920 19:08:46.288950  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.288957  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:46.288964  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:46.289026  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:46.321113  303486 cri.go:89] found id: ""
	I0920 19:08:46.321149  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.321159  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:46.321168  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:46.321239  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:46.359606  303486 cri.go:89] found id: ""
	I0920 19:08:46.359643  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.359652  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:46.359659  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:46.359729  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:46.397059  303486 cri.go:89] found id: ""
	I0920 19:08:46.397089  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.397098  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:46.397104  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:46.397174  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:46.438224  303486 cri.go:89] found id: ""
	I0920 19:08:46.438261  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.438271  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:46.438279  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:46.438355  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:46.476933  303486 cri.go:89] found id: ""
	I0920 19:08:46.476963  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.476973  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:46.476981  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:46.477047  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:46.522115  303486 cri.go:89] found id: ""
	I0920 19:08:46.522150  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.522160  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:46.522167  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:46.522236  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:46.555508  303486 cri.go:89] found id: ""
	I0920 19:08:46.555541  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.555551  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:46.555565  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:46.555580  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:46.632314  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:46.632358  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:46.672381  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:46.672420  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:46.725777  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:46.725835  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:46.739924  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:46.739959  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:46.816667  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:45.284171  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:47.284420  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:49.284798  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:48.462088  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:50.463100  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:49.606279  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:52.105103  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:49.317620  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:49.331792  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:49.331872  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:49.365417  303486 cri.go:89] found id: ""
	I0920 19:08:49.365457  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.365470  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:49.365479  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:49.365543  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:49.399422  303486 cri.go:89] found id: ""
	I0920 19:08:49.399455  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.399465  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:49.399474  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:49.399532  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:49.433040  303486 cri.go:89] found id: ""
	I0920 19:08:49.433069  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.433076  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:49.433082  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:49.433149  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:49.466865  303486 cri.go:89] found id: ""
	I0920 19:08:49.466897  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.466909  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:49.466917  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:49.466986  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:49.499542  303486 cri.go:89] found id: ""
	I0920 19:08:49.499574  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.499583  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:49.499589  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:49.499639  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:49.534310  303486 cri.go:89] found id: ""
	I0920 19:08:49.534338  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.534346  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:49.534353  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:49.534411  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:49.580271  303486 cri.go:89] found id: ""
	I0920 19:08:49.580297  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.580305  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:49.580312  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:49.580385  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:49.626519  303486 cri.go:89] found id: ""
	I0920 19:08:49.626554  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.626562  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:49.626572  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:49.626587  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:49.682923  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:49.682963  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:49.695859  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:49.695895  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:49.767626  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:49.767669  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:49.767697  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:49.849570  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:49.849614  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:52.387653  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:52.400693  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:52.400757  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:52.434320  303486 cri.go:89] found id: ""
	I0920 19:08:52.434358  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.434369  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:52.434381  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:52.434448  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:52.469167  303486 cri.go:89] found id: ""
	I0920 19:08:52.469202  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.469214  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:52.469222  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:52.469291  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:52.504241  303486 cri.go:89] found id: ""
	I0920 19:08:52.504287  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.504295  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:52.504304  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:52.504367  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:52.539573  303486 cri.go:89] found id: ""
	I0920 19:08:52.539604  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.539613  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:52.539619  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:52.539697  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:52.573794  303486 cri.go:89] found id: ""
	I0920 19:08:52.573821  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.573829  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:52.573834  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:52.573931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:52.607628  303486 cri.go:89] found id: ""
	I0920 19:08:52.607660  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.607670  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:52.607676  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:52.607738  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:52.639088  303486 cri.go:89] found id: ""
	I0920 19:08:52.639121  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.639132  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:52.639140  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:52.639204  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:52.673585  303486 cri.go:89] found id: ""
	I0920 19:08:52.673624  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.673636  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:52.673650  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:52.673667  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:52.726463  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:52.726504  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:52.739520  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:52.739553  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:52.820610  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:52.820638  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:52.820653  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:52.898567  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:52.898612  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:51.783687  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:53.784963  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:52.962326  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:54.963069  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:54.105159  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:56.604367  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:55.440875  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:55.454526  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:55.454602  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:55.490616  303486 cri.go:89] found id: ""
	I0920 19:08:55.490655  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.490664  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:55.490671  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:55.490735  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:55.530256  303486 cri.go:89] found id: ""
	I0920 19:08:55.530287  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.530296  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:55.530304  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:55.530357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:55.565209  303486 cri.go:89] found id: ""
	I0920 19:08:55.565242  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.565253  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:55.565260  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:55.565319  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:55.599522  303486 cri.go:89] found id: ""
	I0920 19:08:55.599553  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.599563  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:55.599571  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:55.599634  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:55.634662  303486 cri.go:89] found id: ""
	I0920 19:08:55.634692  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.634700  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:55.634707  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:55.634759  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:55.670326  303486 cri.go:89] found id: ""
	I0920 19:08:55.670361  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.670372  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:55.670379  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:55.670434  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:55.702589  303486 cri.go:89] found id: ""
	I0920 19:08:55.702617  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.702625  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:55.702632  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:55.702694  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:55.737615  303486 cri.go:89] found id: ""
	I0920 19:08:55.737643  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.737653  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:55.737667  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:55.737682  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:55.816827  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:55.816873  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:55.855521  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:55.855550  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:55.905002  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:55.905047  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:55.918292  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:55.918324  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:55.987445  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:58.488566  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:58.503898  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:58.504001  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:58.539089  303486 cri.go:89] found id: ""
	I0920 19:08:58.539117  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.539127  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:58.539135  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:58.539199  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:58.576432  303486 cri.go:89] found id: ""
	I0920 19:08:58.576459  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.576467  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:58.576473  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:58.576542  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:58.613779  303486 cri.go:89] found id: ""
	I0920 19:08:58.613814  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.613825  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:58.613833  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:58.613932  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:58.648717  303486 cri.go:89] found id: ""
	I0920 19:08:58.648757  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.648768  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:58.648777  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:58.648845  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:58.681533  303486 cri.go:89] found id: ""
	I0920 19:08:58.681568  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.681585  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:58.681593  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:58.681647  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:58.714833  303486 cri.go:89] found id: ""
	I0920 19:08:58.714867  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.714877  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:58.714886  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:58.714951  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:58.755939  303486 cri.go:89] found id: ""
	I0920 19:08:58.755972  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.755980  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:58.755986  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:58.756037  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:58.793195  303486 cri.go:89] found id: ""
	I0920 19:08:58.793229  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.793240  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:58.793252  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:58.793267  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:58.807903  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:58.807939  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:58.873993  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:58.874022  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:58.874042  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:56.283846  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.286474  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:56.963398  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.963513  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.606087  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:01.106199  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.955201  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:58.955249  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:58.994230  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:58.994265  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:01.548403  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:01.561467  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:01.561541  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:01.595339  303486 cri.go:89] found id: ""
	I0920 19:09:01.595374  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.595382  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:01.595388  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:01.595463  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:01.631995  303486 cri.go:89] found id: ""
	I0920 19:09:01.632033  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.632043  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:01.632051  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:01.632119  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:01.667556  303486 cri.go:89] found id: ""
	I0920 19:09:01.667586  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.667596  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:01.667604  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:01.667669  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:01.702678  303486 cri.go:89] found id: ""
	I0920 19:09:01.702708  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.702716  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:01.702723  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:01.702786  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:01.739953  303486 cri.go:89] found id: ""
	I0920 19:09:01.739987  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.739999  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:01.740008  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:01.740075  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:01.774188  303486 cri.go:89] found id: ""
	I0920 19:09:01.774222  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.774239  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:01.774249  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:01.774317  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:01.808885  303486 cri.go:89] found id: ""
	I0920 19:09:01.808916  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.808927  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:01.808935  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:01.808997  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:01.842357  303486 cri.go:89] found id: ""
	I0920 19:09:01.842394  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.842404  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:01.842417  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:01.842433  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:01.881750  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:01.881782  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:01.932190  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:01.932236  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:01.946305  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:01.946337  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:02.020099  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:02.020127  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:02.020141  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:00.784428  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:03.284109  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:01.462613  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:03.962360  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:05.963735  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:03.605623  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:06.104994  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:04.601186  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:04.614292  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:04.614374  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:04.649579  303486 cri.go:89] found id: ""
	I0920 19:09:04.649611  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.649619  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:04.649625  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:04.649683  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:04.684039  303486 cri.go:89] found id: ""
	I0920 19:09:04.684076  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.684094  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:04.684108  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:04.684182  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:04.729130  303486 cri.go:89] found id: ""
	I0920 19:09:04.729166  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.729177  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:04.729186  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:04.729244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:04.762646  303486 cri.go:89] found id: ""
	I0920 19:09:04.762682  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.762690  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:04.762697  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:04.762761  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:04.797492  303486 cri.go:89] found id: ""
	I0920 19:09:04.797518  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.797527  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:04.797533  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:04.797588  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:04.832780  303486 cri.go:89] found id: ""
	I0920 19:09:04.832813  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.832823  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:04.832831  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:04.832893  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:04.868489  303486 cri.go:89] found id: ""
	I0920 19:09:04.868526  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.868537  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:04.868546  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:04.868613  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:04.901115  303486 cri.go:89] found id: ""
	I0920 19:09:04.901156  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.901164  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:04.901174  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:04.901186  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:04.952435  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:04.952482  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:04.966450  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:04.966481  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:05.035951  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:05.035977  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:05.035991  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:05.120961  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:05.121016  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:07.659497  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:07.672989  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:07.673062  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:07.708200  303486 cri.go:89] found id: ""
	I0920 19:09:07.708236  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.708247  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:07.708256  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:07.708320  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:07.742116  303486 cri.go:89] found id: ""
	I0920 19:09:07.742156  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.742166  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:07.742175  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:07.742231  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:07.774369  303486 cri.go:89] found id: ""
	I0920 19:09:07.774401  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.774410  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:07.774419  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:07.774485  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:07.811727  303486 cri.go:89] found id: ""
	I0920 19:09:07.811756  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.811763  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:07.811769  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:07.811825  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:07.849613  303486 cri.go:89] found id: ""
	I0920 19:09:07.849646  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.849655  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:07.849661  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:07.849715  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:07.884643  303486 cri.go:89] found id: ""
	I0920 19:09:07.884679  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.884690  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:07.884698  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:07.884770  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:07.920240  303486 cri.go:89] found id: ""
	I0920 19:09:07.920272  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.920283  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:07.920292  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:07.920371  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:07.954729  303486 cri.go:89] found id: ""
	I0920 19:09:07.954768  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.954780  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:07.954792  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:07.954808  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:08.008679  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:08.008732  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:08.023637  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:08.023673  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:08.097298  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:08.097325  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:08.097340  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:08.173404  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:08.173444  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:05.783765  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:08.283642  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:08.462994  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:10.965062  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:08.106350  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:10.605138  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:12.605390  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:10.718224  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:10.732520  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:10.732593  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:10.766764  303486 cri.go:89] found id: ""
	I0920 19:09:10.766800  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.766811  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:10.766821  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:10.766887  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:10.800039  303486 cri.go:89] found id: ""
	I0920 19:09:10.800077  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.800087  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:10.800095  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:10.800157  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:10.833931  303486 cri.go:89] found id: ""
	I0920 19:09:10.833969  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.833979  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:10.833985  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:10.834057  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:10.867714  303486 cri.go:89] found id: ""
	I0920 19:09:10.867752  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.867763  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:10.867771  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:10.867840  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:10.903026  303486 cri.go:89] found id: ""
	I0920 19:09:10.903060  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.903068  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:10.903075  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:10.903131  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:10.936968  303486 cri.go:89] found id: ""
	I0920 19:09:10.937002  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.937013  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:10.937021  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:10.937089  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:10.973055  303486 cri.go:89] found id: ""
	I0920 19:09:10.973079  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.973087  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:10.973093  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:10.973145  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:11.010283  303486 cri.go:89] found id: ""
	I0920 19:09:11.010310  303486 logs.go:276] 0 containers: []
	W0920 19:09:11.010321  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:11.010333  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:11.010352  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:11.025202  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:11.025239  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:11.104268  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:11.104295  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:11.104312  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:11.182281  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:11.182326  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:11.219296  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:11.219335  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:13.767833  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:13.780805  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:13.780890  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:13.822288  303486 cri.go:89] found id: ""
	I0920 19:09:13.822317  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.822327  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:13.822334  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:13.822388  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:13.862068  303486 cri.go:89] found id: ""
	I0920 19:09:13.862098  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.862106  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:13.862112  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:13.862163  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:13.898497  303486 cri.go:89] found id: ""
	I0920 19:09:13.898529  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.898540  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:13.898550  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:13.898618  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:13.935994  303486 cri.go:89] found id: ""
	I0920 19:09:13.936022  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.936030  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:13.936038  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:13.936105  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:10.277863  302869 pod_ready.go:82] duration metric: took 4m0.000569658s for pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace to be "Ready" ...
	E0920 19:09:10.277919  302869 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 19:09:10.277965  302869 pod_ready.go:39] duration metric: took 4m13.052343801s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:10.278003  302869 kubeadm.go:597] duration metric: took 4m21.10965758s to restartPrimaryControlPlane
	W0920 19:09:10.278125  302869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 19:09:10.278168  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:09:13.462752  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:15.962371  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:14.605565  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:17.112026  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:13.973764  303486 cri.go:89] found id: ""
	I0920 19:09:13.973801  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.973812  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:13.973820  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:13.973898  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:14.009443  303486 cri.go:89] found id: ""
	I0920 19:09:14.009482  303486 logs.go:276] 0 containers: []
	W0920 19:09:14.009494  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:14.009502  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:14.009577  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:14.045593  303486 cri.go:89] found id: ""
	I0920 19:09:14.045629  303486 logs.go:276] 0 containers: []
	W0920 19:09:14.045639  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:14.045648  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:14.045714  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:14.086273  303486 cri.go:89] found id: ""
	I0920 19:09:14.086310  303486 logs.go:276] 0 containers: []
	W0920 19:09:14.086319  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:14.086330  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:14.086343  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:14.140730  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:14.140772  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:14.154198  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:14.154232  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:14.224716  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:14.224739  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:14.224754  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:14.302625  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:14.302665  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:16.840816  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:16.854905  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:16.855002  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:16.892994  303486 cri.go:89] found id: ""
	I0920 19:09:16.893028  303486 logs.go:276] 0 containers: []
	W0920 19:09:16.893038  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:16.893045  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:16.893103  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:16.931265  303486 cri.go:89] found id: ""
	I0920 19:09:16.931293  303486 logs.go:276] 0 containers: []
	W0920 19:09:16.931307  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:16.931313  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:16.931364  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:16.970085  303486 cri.go:89] found id: ""
	I0920 19:09:16.970119  303486 logs.go:276] 0 containers: []
	W0920 19:09:16.970129  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:16.970138  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:16.970189  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:17.003163  303486 cri.go:89] found id: ""
	I0920 19:09:17.003194  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.003206  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:17.003214  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:17.003282  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:17.040577  303486 cri.go:89] found id: ""
	I0920 19:09:17.040618  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.040633  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:17.040640  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:17.040706  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:17.073946  303486 cri.go:89] found id: ""
	I0920 19:09:17.073986  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.073995  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:17.074006  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:17.074066  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:17.111569  303486 cri.go:89] found id: ""
	I0920 19:09:17.111636  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.111648  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:17.111664  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:17.111730  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:17.148005  303486 cri.go:89] found id: ""
	I0920 19:09:17.148034  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.148044  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:17.148056  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:17.148072  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:17.222281  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:17.222306  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:17.222324  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:17.297577  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:17.297619  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:17.334709  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:17.334740  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:17.386279  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:17.386320  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:17.962802  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:19.963289  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:19.605813  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:22.105024  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:19.901017  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:19.914489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:19.914571  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:19.955023  303486 cri.go:89] found id: ""
	I0920 19:09:19.955051  303486 logs.go:276] 0 containers: []
	W0920 19:09:19.955060  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:19.955067  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:19.955125  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:19.995536  303486 cri.go:89] found id: ""
	I0920 19:09:19.995575  303486 logs.go:276] 0 containers: []
	W0920 19:09:19.995585  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:19.995594  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:19.995650  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:20.031153  303486 cri.go:89] found id: ""
	I0920 19:09:20.031181  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.031190  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:20.031198  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:20.031266  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:20.064145  303486 cri.go:89] found id: ""
	I0920 19:09:20.064174  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.064190  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:20.064199  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:20.064256  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:20.098399  303486 cri.go:89] found id: ""
	I0920 19:09:20.098429  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.098440  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:20.098449  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:20.098505  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:20.138805  303486 cri.go:89] found id: ""
	I0920 19:09:20.138833  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.138843  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:20.138852  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:20.138914  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:20.183291  303486 cri.go:89] found id: ""
	I0920 19:09:20.183322  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.183333  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:20.183342  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:20.183406  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:20.220344  303486 cri.go:89] found id: ""
	I0920 19:09:20.220378  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.220396  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:20.220409  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:20.220426  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:20.271043  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:20.271086  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:20.286724  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:20.286754  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:20.358233  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:20.358273  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:20.358291  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:20.439511  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:20.439568  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:22.982570  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:22.995384  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:22.995475  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:23.029031  303486 cri.go:89] found id: ""
	I0920 19:09:23.029069  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.029081  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:23.029091  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:23.029166  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:23.063291  303486 cri.go:89] found id: ""
	I0920 19:09:23.063325  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.063336  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:23.063343  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:23.063413  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:23.097494  303486 cri.go:89] found id: ""
	I0920 19:09:23.097525  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.097536  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:23.097545  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:23.097610  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:23.132169  303486 cri.go:89] found id: ""
	I0920 19:09:23.132197  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.132204  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:23.132211  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:23.132276  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:23.173651  303486 cri.go:89] found id: ""
	I0920 19:09:23.173682  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.173692  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:23.173700  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:23.173763  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:23.206098  303486 cri.go:89] found id: ""
	I0920 19:09:23.206135  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.206146  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:23.206155  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:23.206216  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:23.245422  303486 cri.go:89] found id: ""
	I0920 19:09:23.245466  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.245479  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:23.245489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:23.245569  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:23.280326  303486 cri.go:89] found id: ""
	I0920 19:09:23.280357  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.280365  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:23.280376  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:23.280390  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:23.330986  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:23.331034  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:23.344751  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:23.344788  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:23.420213  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:23.420239  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:23.420255  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:23.500449  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:23.500491  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:22.462590  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:24.962516  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:24.105502  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:26.110930  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:26.040050  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:26.056377  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:26.056463  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:26.094122  303486 cri.go:89] found id: ""
	I0920 19:09:26.094160  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.094170  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:26.094179  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:26.094246  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:26.129383  303486 cri.go:89] found id: ""
	I0920 19:09:26.129408  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.129415  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:26.129422  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:26.129472  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:26.163579  303486 cri.go:89] found id: ""
	I0920 19:09:26.163611  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.163621  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:26.163630  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:26.163699  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:26.208026  303486 cri.go:89] found id: ""
	I0920 19:09:26.208057  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.208065  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:26.208071  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:26.208138  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:26.245375  303486 cri.go:89] found id: ""
	I0920 19:09:26.245409  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.245421  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:26.245438  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:26.245500  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:26.280283  303486 cri.go:89] found id: ""
	I0920 19:09:26.280315  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.280326  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:26.280336  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:26.280397  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:26.314621  303486 cri.go:89] found id: ""
	I0920 19:09:26.314657  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.314670  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:26.314679  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:26.314773  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:26.347667  303486 cri.go:89] found id: ""
	I0920 19:09:26.347694  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.347701  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:26.347711  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:26.347722  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:26.397221  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:26.397259  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:26.411126  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:26.411157  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:26.479631  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:26.479657  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:26.479686  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:26.555439  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:26.555477  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:26.962845  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:28.963560  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:28.605949  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:30.612349  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:32.104187  303063 pod_ready.go:82] duration metric: took 4m0.005608637s for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	E0920 19:09:32.104213  303063 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 19:09:32.104224  303063 pod_ready.go:39] duration metric: took 4m5.679030104s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:32.104241  303063 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:09:32.104273  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:32.104327  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:32.151755  303063 cri.go:89] found id: "f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:32.151778  303063 cri.go:89] found id: ""
	I0920 19:09:32.151787  303063 logs.go:276] 1 containers: [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f]
	I0920 19:09:32.151866  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.157358  303063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:32.157426  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:32.201227  303063 cri.go:89] found id: "5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:32.201255  303063 cri.go:89] found id: ""
	I0920 19:09:32.201263  303063 logs.go:276] 1 containers: [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281]
	I0920 19:09:32.201327  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.206508  303063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:32.206604  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:32.243509  303063 cri.go:89] found id: "88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:32.243533  303063 cri.go:89] found id: ""
	I0920 19:09:32.243542  303063 logs.go:276] 1 containers: [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f]
	I0920 19:09:32.243595  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.247764  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:32.247836  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:32.283590  303063 cri.go:89] found id: "25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:32.283627  303063 cri.go:89] found id: ""
	I0920 19:09:32.283637  303063 logs.go:276] 1 containers: [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862]
	I0920 19:09:32.283727  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.287826  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:32.287893  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:32.329071  303063 cri.go:89] found id: "3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:32.329111  303063 cri.go:89] found id: ""
	I0920 19:09:32.329123  303063 logs.go:276] 1 containers: [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4]
	I0920 19:09:32.329196  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.333152  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:32.333236  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:32.372444  303063 cri.go:89] found id: "9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:32.372474  303063 cri.go:89] found id: ""
	I0920 19:09:32.372485  303063 logs.go:276] 1 containers: [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba]
	I0920 19:09:32.372548  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.376414  303063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:32.376494  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:32.412244  303063 cri.go:89] found id: ""
	I0920 19:09:32.412280  303063 logs.go:276] 0 containers: []
	W0920 19:09:32.412291  303063 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:32.412299  303063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 19:09:32.412352  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 19:09:32.449451  303063 cri.go:89] found id: "a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:32.449472  303063 cri.go:89] found id: "0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:32.449477  303063 cri.go:89] found id: ""
	I0920 19:09:32.449485  303063 logs.go:276] 2 containers: [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85]
	I0920 19:09:32.449544  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.454960  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.459688  303063 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:32.459720  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:09:32.599208  303063 logs.go:123] Gathering logs for etcd [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281] ...
	I0920 19:09:32.599241  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:32.656960  303063 logs.go:123] Gathering logs for kube-proxy [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4] ...
	I0920 19:09:32.657000  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:32.703259  303063 logs.go:123] Gathering logs for kube-controller-manager [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba] ...
	I0920 19:09:32.703308  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:32.769218  303063 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:32.769260  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
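For the 303063 cluster the control-plane containers do exist, so the gatherer above first resolves each container id and then tails its logs. A rough manual equivalent of those steps, assuming shell access to the node and crictl at /usr/bin/crictl as shown in the log (the container id is illustrative):

	sudo crictl ps -a --quiet --name=kube-apiserver      # resolve the container id
	sudo /usr/bin/crictl logs --tail 400 <container-id>  # tail that container's logs
	sudo journalctl -u crio -n 400                        # CRI-O runtime logs for context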
	I0920 19:09:29.096877  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:29.110081  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:29.110170  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:29.152570  303486 cri.go:89] found id: ""
	I0920 19:09:29.152598  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.152608  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:29.152616  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:29.152689  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:29.188596  303486 cri.go:89] found id: ""
	I0920 19:09:29.188627  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.188638  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:29.188645  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:29.188713  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:29.228789  303486 cri.go:89] found id: ""
	I0920 19:09:29.228831  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.228841  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:29.228850  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:29.228913  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:29.260013  303486 cri.go:89] found id: ""
	I0920 19:09:29.260040  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.260048  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:29.260054  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:29.260105  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:29.293373  303486 cri.go:89] found id: ""
	I0920 19:09:29.293401  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.293411  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:29.293418  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:29.293487  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:29.325860  303486 cri.go:89] found id: ""
	I0920 19:09:29.325898  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.325925  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:29.325935  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:29.326027  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:29.358873  303486 cri.go:89] found id: ""
	I0920 19:09:29.358909  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.358921  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:29.358930  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:29.358994  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:29.392029  303486 cri.go:89] found id: ""
	I0920 19:09:29.392057  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.392067  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:29.392080  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:29.392095  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:29.467460  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:29.467504  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:29.508258  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:29.508298  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:29.559238  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:29.559274  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:29.574233  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:29.574264  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:29.649318  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
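The 303486 run keeps cycling through the same probe because no control-plane containers ever appear and the apiserver on localhost:8443 stays unreachable. A condensed sketch of that probe, using only commands visible in the log (the loop is an abbreviation; the log issues one crictl call per component, and the kubectl path is the v1.20.0 binary minikube places on the node):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$c"   # empty output => component not running
	done
	sudo journalctl -u kubelet -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig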
	I0920 19:09:32.150539  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:32.168442  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:32.168527  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:32.210069  303486 cri.go:89] found id: ""
	I0920 19:09:32.210103  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.210120  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:32.210129  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:32.210191  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:32.243468  303486 cri.go:89] found id: ""
	I0920 19:09:32.243501  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.243511  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:32.243519  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:32.243586  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:32.275958  303486 cri.go:89] found id: ""
	I0920 19:09:32.275988  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.275996  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:32.276003  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:32.276056  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:32.312560  303486 cri.go:89] found id: ""
	I0920 19:09:32.312598  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.312609  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:32.312620  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:32.312695  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:32.347157  303486 cri.go:89] found id: ""
	I0920 19:09:32.347185  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.347193  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:32.347200  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:32.347264  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:32.382787  303486 cri.go:89] found id: ""
	I0920 19:09:32.382820  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.382832  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:32.382841  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:32.382898  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:32.416182  303486 cri.go:89] found id: ""
	I0920 19:09:32.416216  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.416226  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:32.416234  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:32.416297  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:32.448863  303486 cri.go:89] found id: ""
	I0920 19:09:32.448895  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.448906  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:32.448919  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:32.448934  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:32.501882  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:32.501934  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:32.517984  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:32.518014  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:32.588517  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:32.588547  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:32.588560  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:32.671869  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:32.671921  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:35.211780  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:35.225476  303486 kubeadm.go:597] duration metric: took 4m2.827297435s to restartPrimaryControlPlane
	W0920 19:09:35.225582  303486 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 19:09:35.225618  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:09:35.686956  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:35.701803  303486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:09:35.712572  303486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:09:35.722867  303486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:09:35.722894  303486 kubeadm.go:157] found existing configuration files:
	
	I0920 19:09:35.722948  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:09:35.732295  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:09:35.732358  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:09:35.741569  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:09:35.750515  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:09:35.750577  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:09:35.760469  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:09:35.770207  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:09:35.770284  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:09:35.780121  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:09:35.789887  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:09:35.789974  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
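	The four grep/rm pairs above are the stale-kubeconfig sweep that runs just before "kubeadm init": any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm can regenerate it. A minimal shell sketch of that sweep (file names and endpoint as logged; the loop form is illustrative, not minikube's actual code):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	        sudo rm -f "/etc/kubernetes/$f"   # missing or pointing elsewhere: let kubeadm rewrite it
	      fi
	    done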
	I0920 19:09:35.800914  303486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:09:35.871635  303486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 19:09:35.871691  303486 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:09:36.021411  303486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:09:36.021565  303486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:09:36.021773  303486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 19:09:36.217540  303486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:09:31.462557  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:33.463284  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:35.964501  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:36.723149  302869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.444941441s)
	I0920 19:09:36.723244  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:36.740763  302869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:09:36.751727  302869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:09:36.762710  302869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:09:36.762736  302869 kubeadm.go:157] found existing configuration files:
	
	I0920 19:09:36.762793  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:09:36.773454  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:09:36.773536  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:09:36.784738  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:09:36.794740  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:09:36.794818  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:09:36.805727  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:09:36.818253  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:09:36.818329  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:09:36.831210  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:09:36.842838  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:09:36.842914  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:09:36.853306  302869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:09:36.903121  302869 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 19:09:36.903285  302869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:09:37.025789  302869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:09:37.025969  302869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:09:37.026110  302869 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 19:09:37.034613  302869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:09:36.219542  303486 out.go:235]   - Generating certificates and keys ...
	I0920 19:09:36.219684  303486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:09:36.219769  303486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:09:36.219892  303486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:09:36.219973  303486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:09:36.220090  303486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:09:36.220181  303486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:09:36.220302  303486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:09:36.220414  303486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:09:36.220530  303486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:09:36.220626  303486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:09:36.220691  303486 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:09:36.220767  303486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:09:36.377012  303486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:09:36.706154  303486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:09:36.907341  303486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:09:37.091990  303486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:09:37.122813  303486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:09:37.124422  303486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:09:37.124531  303486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:09:37.277461  303486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:09:33.294289  303063 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:33.294346  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:33.362317  303063 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:33.362364  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:33.375712  303063 logs.go:123] Gathering logs for kube-scheduler [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862] ...
	I0920 19:09:33.375747  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:33.411136  303063 logs.go:123] Gathering logs for storage-provisioner [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5] ...
	I0920 19:09:33.411168  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:33.445649  303063 logs.go:123] Gathering logs for storage-provisioner [0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85] ...
	I0920 19:09:33.445690  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:33.478869  303063 logs.go:123] Gathering logs for container status ...
	I0920 19:09:33.478898  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:33.529433  303063 logs.go:123] Gathering logs for kube-apiserver [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f] ...
	I0920 19:09:33.529480  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:33.570515  303063 logs.go:123] Gathering logs for coredns [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f] ...
	I0920 19:09:33.570560  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:36.107490  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:36.124979  303063 api_server.go:72] duration metric: took 4m17.429642296s to wait for apiserver process to appear ...
	I0920 19:09:36.125014  303063 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:09:36.125069  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:36.125145  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:36.181962  303063 cri.go:89] found id: "f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:36.181990  303063 cri.go:89] found id: ""
	I0920 19:09:36.182001  303063 logs.go:276] 1 containers: [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f]
	I0920 19:09:36.182061  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.186792  303063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:36.186876  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:36.235963  303063 cri.go:89] found id: "5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:36.235993  303063 cri.go:89] found id: ""
	I0920 19:09:36.236003  303063 logs.go:276] 1 containers: [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281]
	I0920 19:09:36.236066  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.241177  303063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:36.241321  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:36.288324  303063 cri.go:89] found id: "88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:36.288353  303063 cri.go:89] found id: ""
	I0920 19:09:36.288361  303063 logs.go:276] 1 containers: [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f]
	I0920 19:09:36.288415  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.293328  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:36.293413  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:36.335126  303063 cri.go:89] found id: "25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:36.335153  303063 cri.go:89] found id: ""
	I0920 19:09:36.335163  303063 logs.go:276] 1 containers: [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862]
	I0920 19:09:36.335226  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.339400  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:36.339470  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:36.375555  303063 cri.go:89] found id: "3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:36.375582  303063 cri.go:89] found id: ""
	I0920 19:09:36.375592  303063 logs.go:276] 1 containers: [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4]
	I0920 19:09:36.375657  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.379679  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:36.379753  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:36.415398  303063 cri.go:89] found id: "9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:36.415424  303063 cri.go:89] found id: ""
	I0920 19:09:36.415434  303063 logs.go:276] 1 containers: [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba]
	I0920 19:09:36.415495  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.420183  303063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:36.420260  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:36.462018  303063 cri.go:89] found id: ""
	I0920 19:09:36.462049  303063 logs.go:276] 0 containers: []
	W0920 19:09:36.462060  303063 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:36.462068  303063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 19:09:36.462129  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 19:09:36.515520  303063 cri.go:89] found id: "a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:36.515551  303063 cri.go:89] found id: "0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:36.515557  303063 cri.go:89] found id: ""
	I0920 19:09:36.515567  303063 logs.go:276] 2 containers: [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85]
	I0920 19:09:36.515628  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.520140  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.524197  303063 logs.go:123] Gathering logs for kube-apiserver [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f] ...
	I0920 19:09:36.524222  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:36.589535  303063 logs.go:123] Gathering logs for coredns [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f] ...
	I0920 19:09:36.589570  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:36.628836  303063 logs.go:123] Gathering logs for storage-provisioner [0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85] ...
	I0920 19:09:36.628865  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:36.667614  303063 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:36.667654  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:37.164164  303063 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:37.164222  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:37.253505  303063 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:37.253550  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:37.272704  303063 logs.go:123] Gathering logs for kube-scheduler [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862] ...
	I0920 19:09:37.272742  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:37.315827  303063 logs.go:123] Gathering logs for kube-proxy [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4] ...
	I0920 19:09:37.315869  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:37.360449  303063 logs.go:123] Gathering logs for kube-controller-manager [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba] ...
	I0920 19:09:37.360479  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:37.428225  303063 logs.go:123] Gathering logs for storage-provisioner [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5] ...
	I0920 19:09:37.428270  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:37.469766  303063 logs.go:123] Gathering logs for container status ...
	I0920 19:09:37.469795  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:37.524517  303063 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:37.524553  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:09:37.652128  303063 logs.go:123] Gathering logs for etcd [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281] ...
	I0920 19:09:37.652162  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:37.036846  302869 out.go:235]   - Generating certificates and keys ...
	I0920 19:09:37.036956  302869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:09:37.037061  302869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:09:37.037194  302869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:09:37.037284  302869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:09:37.037386  302869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:09:37.037462  302869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:09:37.037546  302869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:09:37.037635  302869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:09:37.037734  302869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:09:37.037847  302869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:09:37.037918  302869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:09:37.037995  302869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:09:37.116270  302869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:09:37.615537  302869 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 19:09:37.907479  302869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:09:38.090167  302869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:09:38.209430  302869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:09:38.209780  302869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:09:38.212626  302869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:09:37.279714  303486 out.go:235]   - Booting up control plane ...
	I0920 19:09:37.279861  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:09:37.288448  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:09:37.289724  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:09:37.290822  303486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:09:37.294106  303486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 19:09:38.214873  302869 out.go:235]   - Booting up control plane ...
	I0920 19:09:38.214994  302869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:09:38.215102  302869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:09:38.215199  302869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:09:38.232798  302869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:09:38.238716  302869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:09:38.238784  302869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:09:38.370841  302869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 19:09:38.371037  302869 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 19:09:38.463252  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:40.463322  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:40.212781  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:09:40.217868  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 200:
	ok
	I0920 19:09:40.219021  303063 api_server.go:141] control plane version: v1.31.1
	I0920 19:09:40.219044  303063 api_server.go:131] duration metric: took 4.094023157s to wait for apiserver health ...
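	The healthz wait recorded above simply polls the apiserver endpoint until it returns HTTP 200 with body "ok". An equivalent manual probe (endpoint taken from the log; assumes curl is available on a host that can reach it, illustrative only):

	    curl -ks https://192.168.50.230:8444/healthz   # expect: ok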
	I0920 19:09:40.219053  303063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:09:40.219077  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:40.219128  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:40.264337  303063 cri.go:89] found id: "f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:40.264365  303063 cri.go:89] found id: ""
	I0920 19:09:40.264376  303063 logs.go:276] 1 containers: [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f]
	I0920 19:09:40.264434  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.270143  303063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:40.270222  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:40.321696  303063 cri.go:89] found id: "5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:40.321723  303063 cri.go:89] found id: ""
	I0920 19:09:40.321733  303063 logs.go:276] 1 containers: [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281]
	I0920 19:09:40.321799  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.329068  303063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:40.329149  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:40.387241  303063 cri.go:89] found id: "88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:40.387329  303063 cri.go:89] found id: ""
	I0920 19:09:40.387357  303063 logs.go:276] 1 containers: [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f]
	I0920 19:09:40.387427  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.392896  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:40.392975  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:40.429173  303063 cri.go:89] found id: "25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:40.429200  303063 cri.go:89] found id: ""
	I0920 19:09:40.429210  303063 logs.go:276] 1 containers: [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862]
	I0920 19:09:40.429284  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.434102  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:40.434179  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:40.480569  303063 cri.go:89] found id: "3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:40.480598  303063 cri.go:89] found id: ""
	I0920 19:09:40.480607  303063 logs.go:276] 1 containers: [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4]
	I0920 19:09:40.480669  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.485821  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:40.485935  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:40.531502  303063 cri.go:89] found id: "9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:40.531543  303063 cri.go:89] found id: ""
	I0920 19:09:40.531554  303063 logs.go:276] 1 containers: [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba]
	I0920 19:09:40.531613  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.535699  303063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:40.535769  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:40.569788  303063 cri.go:89] found id: ""
	I0920 19:09:40.569823  303063 logs.go:276] 0 containers: []
	W0920 19:09:40.569835  303063 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:40.569842  303063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 19:09:40.569928  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 19:09:40.604668  303063 cri.go:89] found id: "a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:40.604703  303063 cri.go:89] found id: "0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:40.604710  303063 cri.go:89] found id: ""
	I0920 19:09:40.604721  303063 logs.go:276] 2 containers: [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85]
	I0920 19:09:40.604790  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.608948  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.613331  303063 logs.go:123] Gathering logs for kube-apiserver [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f] ...
	I0920 19:09:40.613360  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:40.657680  303063 logs.go:123] Gathering logs for kube-scheduler [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862] ...
	I0920 19:09:40.657726  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:40.698087  303063 logs.go:123] Gathering logs for kube-controller-manager [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba] ...
	I0920 19:09:40.698125  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:40.753643  303063 logs.go:123] Gathering logs for storage-provisioner [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5] ...
	I0920 19:09:40.753683  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:40.791741  303063 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:40.791790  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:41.176451  303063 logs.go:123] Gathering logs for container status ...
	I0920 19:09:41.176497  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:41.226352  303063 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:41.226386  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:41.307652  303063 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:41.307694  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:41.323271  303063 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:41.323307  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:09:41.441151  303063 logs.go:123] Gathering logs for etcd [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281] ...
	I0920 19:09:41.441195  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:41.495438  303063 logs.go:123] Gathering logs for coredns [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f] ...
	I0920 19:09:41.495494  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:41.543879  303063 logs.go:123] Gathering logs for kube-proxy [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4] ...
	I0920 19:09:41.543930  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:41.595010  303063 logs.go:123] Gathering logs for storage-provisioner [0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85] ...
	I0920 19:09:41.595055  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:44.140048  303063 system_pods.go:59] 8 kube-system pods found
	I0920 19:09:44.140078  303063 system_pods.go:61] "coredns-7c65d6cfc9-427x2" [48b87f9f-4697-4d76-aed1-3d54720172c6] Running
	I0920 19:09:44.140083  303063 system_pods.go:61] "etcd-default-k8s-diff-port-612312" [8537d9ce-87da-425d-a90a-eb4f30f9d23f] Running
	I0920 19:09:44.140087  303063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-612312" [d5705298-0b1f-4f95-bf5d-c795e09dd30e] Running
	I0920 19:09:44.140091  303063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-612312" [99b04f73-91d6-42d4-8def-09b65ca142f6] Running
	I0920 19:09:44.140094  303063 system_pods.go:61] "kube-proxy-zp8l5" [9fe30e51-ef3f-4448-916a-8ad75832b207] Running
	I0920 19:09:44.140097  303063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-612312" [4b9dfd39-5b60-4a90-9edf-0af7e99f0ac6] Running
	I0920 19:09:44.140104  303063 system_pods.go:61] "metrics-server-6867b74b74-2tnqc" [35ce9a11-e606-41da-84bf-b3c5e9a18245] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:44.140108  303063 system_pods.go:61] "storage-provisioner" [6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502] Running
	I0920 19:09:44.140115  303063 system_pods.go:74] duration metric: took 3.921056539s to wait for pod list to return data ...
	I0920 19:09:44.140122  303063 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:09:44.143381  303063 default_sa.go:45] found service account: "default"
	I0920 19:09:44.143409  303063 default_sa.go:55] duration metric: took 3.281031ms for default service account to be created ...
	I0920 19:09:44.143422  303063 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:09:44.148161  303063 system_pods.go:86] 8 kube-system pods found
	I0920 19:09:44.148191  303063 system_pods.go:89] "coredns-7c65d6cfc9-427x2" [48b87f9f-4697-4d76-aed1-3d54720172c6] Running
	I0920 19:09:44.148199  303063 system_pods.go:89] "etcd-default-k8s-diff-port-612312" [8537d9ce-87da-425d-a90a-eb4f30f9d23f] Running
	I0920 19:09:44.148205  303063 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-612312" [d5705298-0b1f-4f95-bf5d-c795e09dd30e] Running
	I0920 19:09:44.148212  303063 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-612312" [99b04f73-91d6-42d4-8def-09b65ca142f6] Running
	I0920 19:09:44.148216  303063 system_pods.go:89] "kube-proxy-zp8l5" [9fe30e51-ef3f-4448-916a-8ad75832b207] Running
	I0920 19:09:44.148221  303063 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-612312" [4b9dfd39-5b60-4a90-9edf-0af7e99f0ac6] Running
	I0920 19:09:44.148230  303063 system_pods.go:89] "metrics-server-6867b74b74-2tnqc" [35ce9a11-e606-41da-84bf-b3c5e9a18245] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:44.148236  303063 system_pods.go:89] "storage-provisioner" [6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502] Running
	I0920 19:09:44.148248  303063 system_pods.go:126] duration metric: took 4.819429ms to wait for k8s-apps to be running ...
	I0920 19:09:44.148260  303063 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:09:44.148312  303063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:44.163839  303063 system_svc.go:56] duration metric: took 15.568956ms WaitForService to wait for kubelet
	I0920 19:09:44.163882  303063 kubeadm.go:582] duration metric: took 4m25.468555427s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:09:44.163911  303063 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:09:44.167622  303063 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:09:44.167656  303063 node_conditions.go:123] node cpu capacity is 2
	I0920 19:09:44.167671  303063 node_conditions.go:105] duration metric: took 3.752828ms to run NodePressure ...
	I0920 19:09:44.167690  303063 start.go:241] waiting for startup goroutines ...
	I0920 19:09:44.167700  303063 start.go:246] waiting for cluster config update ...
	I0920 19:09:44.167716  303063 start.go:255] writing updated cluster config ...
	I0920 19:09:44.168208  303063 ssh_runner.go:195] Run: rm -f paused
	I0920 19:09:44.223860  303063 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:09:44.226056  303063 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-612312" cluster and "default" namespace by default
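	At this point the default-k8s-diff-port-612312 profile is reported ready. A quick manual check of the state the log describes (hypothetical follow-up, not part of the recorded test run) would be:

	    kubectl --context default-k8s-diff-port-612312 get nodes
	    kubectl --context default-k8s-diff-port-612312 -n kube-system get pods   # metrics-server still Pending per the log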
	I0920 19:09:39.373109  302869 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002236347s
	I0920 19:09:39.373229  302869 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 19:09:44.375102  302869 kubeadm.go:310] [api-check] The API server is healthy after 5.001998039s
	I0920 19:09:44.405405  302869 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 19:09:44.428364  302869 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 19:09:44.470575  302869 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 19:09:44.470870  302869 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-339897 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 19:09:44.505469  302869 kubeadm.go:310] [bootstrap-token] Using token: v5zzut.gmtb3j9b0yqqwvtv
	I0920 19:09:44.507561  302869 out.go:235]   - Configuring RBAC rules ...
	I0920 19:09:44.507721  302869 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 19:09:44.522092  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 19:09:44.555238  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 19:09:44.559971  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 19:09:44.566954  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 19:09:44.574111  302869 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 19:09:44.788900  302869 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 19:09:45.229897  302869 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 19:09:45.788397  302869 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 19:09:45.789415  302869 kubeadm.go:310] 
	I0920 19:09:45.789504  302869 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 19:09:45.789516  302869 kubeadm.go:310] 
	I0920 19:09:45.789614  302869 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 19:09:45.789631  302869 kubeadm.go:310] 
	I0920 19:09:45.789664  302869 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 19:09:45.789804  302869 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 19:09:45.789897  302869 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 19:09:45.789930  302869 kubeadm.go:310] 
	I0920 19:09:45.790043  302869 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 19:09:45.790061  302869 kubeadm.go:310] 
	I0920 19:09:45.790130  302869 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 19:09:45.790145  302869 kubeadm.go:310] 
	I0920 19:09:45.790203  302869 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 19:09:45.790269  302869 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 19:09:45.790330  302869 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 19:09:45.790337  302869 kubeadm.go:310] 
	I0920 19:09:45.790438  302869 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 19:09:45.790549  302869 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 19:09:45.790563  302869 kubeadm.go:310] 
	I0920 19:09:45.790664  302869 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v5zzut.gmtb3j9b0yqqwvtv \
	I0920 19:09:45.790792  302869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 \
	I0920 19:09:45.790823  302869 kubeadm.go:310] 	--control-plane 
	I0920 19:09:45.790835  302869 kubeadm.go:310] 
	I0920 19:09:45.790962  302869 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 19:09:45.790977  302869 kubeadm.go:310] 
	I0920 19:09:45.791045  302869 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v5zzut.gmtb3j9b0yqqwvtv \
	I0920 19:09:45.791164  302869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 
	I0920 19:09:45.792825  302869 kubeadm.go:310] W0920 19:09:36.880654    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:09:45.793122  302869 kubeadm.go:310] W0920 19:09:36.881516    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:09:45.793273  302869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:09:45.793317  302869 cni.go:84] Creating CNI manager for ""
	I0920 19:09:45.793331  302869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:09:45.795282  302869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:09:42.464639  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:44.464714  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:45.796961  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:09:45.808972  302869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:09:45.831122  302869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:09:45.831174  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:45.831208  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-339897 minikube.k8s.io/updated_at=2024_09_20T19_09_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=embed-certs-339897 minikube.k8s.io/primary=true
	I0920 19:09:46.057677  302869 ops.go:34] apiserver oom_adj: -16
	I0920 19:09:46.057798  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:46.558670  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:47.057876  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:47.558913  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:48.057925  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:48.557985  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:49.057925  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:49.558500  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:50.058507  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:50.198032  302869 kubeadm.go:1113] duration metric: took 4.366908909s to wait for elevateKubeSystemPrivileges
	I0920 19:09:50.198074  302869 kubeadm.go:394] duration metric: took 5m1.087269263s to StartCluster
	I0920 19:09:50.198100  302869 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:09:50.198209  302869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:09:50.200736  302869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:09:50.201068  302869 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:09:50.201327  302869 config.go:182] Loaded profile config "embed-certs-339897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:09:50.201393  302869 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:09:50.201482  302869 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-339897"
	I0920 19:09:50.201502  302869 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-339897"
	W0920 19:09:50.201512  302869 addons.go:243] addon storage-provisioner should already be in state true
	I0920 19:09:50.201542  302869 host.go:66] Checking if "embed-certs-339897" exists ...
	I0920 19:09:50.202007  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.202050  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.202261  302869 addons.go:69] Setting default-storageclass=true in profile "embed-certs-339897"
	I0920 19:09:50.202285  302869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-339897"
	I0920 19:09:50.202285  302869 addons.go:69] Setting metrics-server=true in profile "embed-certs-339897"
	I0920 19:09:50.202311  302869 addons.go:234] Setting addon metrics-server=true in "embed-certs-339897"
	W0920 19:09:50.202319  302869 addons.go:243] addon metrics-server should already be in state true
	I0920 19:09:50.202349  302869 host.go:66] Checking if "embed-certs-339897" exists ...
	I0920 19:09:50.202688  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.202752  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.202755  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.202793  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.203329  302869 out.go:177] * Verifying Kubernetes components...
	I0920 19:09:50.204655  302869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:09:50.224081  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46289
	I0920 19:09:50.224334  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45801
	I0920 19:09:50.224337  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0920 19:09:50.224579  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.224941  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.225039  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.225214  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.225231  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.225643  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.225682  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.225699  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.225798  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.225818  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.226018  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.226080  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.226564  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.226594  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.226777  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.227444  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.227494  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.229747  302869 addons.go:234] Setting addon default-storageclass=true in "embed-certs-339897"
	W0920 19:09:50.229771  302869 addons.go:243] addon default-storageclass should already be in state true
	I0920 19:09:50.229803  302869 host.go:66] Checking if "embed-certs-339897" exists ...
	I0920 19:09:50.230208  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.230261  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.243865  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42249
	I0920 19:09:50.244292  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.244828  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.244851  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.245080  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I0920 19:09:50.245252  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.245714  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.245810  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.246303  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.246323  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.246661  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.246806  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.248050  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:09:50.248671  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:09:50.250223  302869 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 19:09:50.250319  302869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:09:46.963562  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:48.965266  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:50.250485  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38237
	I0920 19:09:50.250954  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.251418  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.251435  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.251535  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:09:50.251556  302869 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:09:50.251594  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:09:50.251680  302869 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:09:50.251693  302869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:09:50.251706  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:09:50.251889  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.252452  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.252502  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.255422  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.255692  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.255902  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:09:50.255928  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.256372  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:09:50.256396  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.256442  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:09:50.256663  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:09:50.256697  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:09:50.256840  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:09:50.256868  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:09:50.257066  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:09:50.257089  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:09:50.257268  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:09:50.272424  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35925
	I0920 19:09:50.273107  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.273729  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.273746  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.274208  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.274402  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.276189  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:09:50.276384  302869 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:09:50.276399  302869 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:09:50.276417  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:09:50.279319  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.279718  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:09:50.279747  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.279850  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:09:50.280044  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:09:50.280305  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:09:50.280481  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:09:50.407262  302869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:09:50.455491  302869 node_ready.go:35] waiting up to 6m0s for node "embed-certs-339897" to be "Ready" ...
	I0920 19:09:50.503634  302869 node_ready.go:49] node "embed-certs-339897" has status "Ready":"True"
	I0920 19:09:50.503663  302869 node_ready.go:38] duration metric: took 48.13478ms for node "embed-certs-339897" to be "Ready" ...
	I0920 19:09:50.503672  302869 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:50.532327  302869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:50.589446  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:09:50.589482  302869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 19:09:50.613277  302869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:09:50.619161  302869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:09:50.662197  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:09:50.662232  302869 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:09:50.753073  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:09:50.753106  302869 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:09:50.842679  302869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:09:51.790932  302869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.171721983s)
	I0920 19:09:51.790997  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791012  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.791029  302869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.177708427s)
	I0920 19:09:51.791073  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791089  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.791380  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.791438  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.791444  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.791483  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791380  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.791527  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.791541  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791556  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.791416  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.791493  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.793128  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.793159  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.793177  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.793149  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.793148  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.793208  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.820906  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.820939  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.821290  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.821312  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:52.003182  302869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.160452395s)
	I0920 19:09:52.003247  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:52.003261  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:52.003593  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:52.003600  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:52.003622  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:52.003632  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:52.003640  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:52.003985  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:52.004003  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:52.004017  302869 addons.go:475] Verifying addon metrics-server=true in "embed-certs-339897"
	I0920 19:09:52.006444  302869 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 19:09:52.008313  302869 addons.go:510] duration metric: took 1.806914162s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 19:09:52.539578  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:53.539999  302869 pod_ready.go:93] pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:53.540026  302869 pod_ready.go:82] duration metric: took 3.007669334s for pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:53.540036  302869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:51.463340  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:53.963461  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:55.547997  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:57.552686  302869 pod_ready.go:93] pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.552714  302869 pod_ready.go:82] duration metric: took 4.01267227s for pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.552724  302869 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.560885  302869 pod_ready.go:93] pod "etcd-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.560910  302869 pod_ready.go:82] duration metric: took 8.179457ms for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.560919  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.577414  302869 pod_ready.go:93] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.577441  302869 pod_ready.go:82] duration metric: took 16.515029ms for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.577451  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.588547  302869 pod_ready.go:93] pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.588574  302869 pod_ready.go:82] duration metric: took 11.116334ms for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.588583  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-whcbh" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.594919  302869 pod_ready.go:93] pod "kube-proxy-whcbh" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.594942  302869 pod_ready.go:82] duration metric: took 6.35266ms for pod "kube-proxy-whcbh" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.594951  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.943559  302869 pod_ready.go:93] pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.943585  302869 pod_ready.go:82] duration metric: took 348.626555ms for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.943592  302869 pod_ready.go:39] duration metric: took 7.439908161s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:57.943609  302869 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:09:57.943662  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:57.959537  302869 api_server.go:72] duration metric: took 7.758426976s to wait for apiserver process to appear ...
	I0920 19:09:57.959567  302869 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:09:57.959594  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:09:57.964316  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 200:
	ok
	I0920 19:09:57.965668  302869 api_server.go:141] control plane version: v1.31.1
	I0920 19:09:57.965690  302869 api_server.go:131] duration metric: took 6.115168ms to wait for apiserver health ...
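	The healthz probe above can be reproduced by hand against the apiserver endpoint recorded for this profile (192.168.72.72:8443). A minimal sketch, assuming host network access and accepting the cluster-internal certificate:

	    # probe the kube-apiserver health endpoint that minikube polls (expects the body "ok")
	    curl -k https://192.168.72.72:8443/healthz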
	I0920 19:09:57.965697  302869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:09:58.148306  302869 system_pods.go:59] 9 kube-system pods found
	I0920 19:09:58.148339  302869 system_pods.go:61] "coredns-7c65d6cfc9-2zlww" [5eb78763-7160-4ae9-80c3-87a82a6dc992] Running
	I0920 19:09:58.148345  302869 system_pods.go:61] "coredns-7c65d6cfc9-7fxdr" [85a441e8-39b0-4623-a7bd-eebbd1574f20] Running
	I0920 19:09:58.148349  302869 system_pods.go:61] "etcd-embed-certs-339897" [150a2276-3896-498e-89f7-44cf4554da69] Running
	I0920 19:09:58.148352  302869 system_pods.go:61] "kube-apiserver-embed-certs-339897" [396520a3-2567-4267-852d-9f9525dd5e01] Running
	I0920 19:09:58.148356  302869 system_pods.go:61] "kube-controller-manager-embed-certs-339897" [7f64ad97-3230-4cf5-92ad-cf58ef88a2b0] Running
	I0920 19:09:58.148359  302869 system_pods.go:61] "kube-proxy-whcbh" [3a2dbb60-1a51-4874-98b8-75d1a35b0512] Running
	I0920 19:09:58.148361  302869 system_pods.go:61] "kube-scheduler-embed-certs-339897" [31214783-f8cf-46c6-a305-fde7692dfc72] Running
	I0920 19:09:58.148367  302869 system_pods.go:61] "metrics-server-6867b74b74-tw9fh" [8366591d-8916-4b9f-be8a-64ddc185f576] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:58.148371  302869 system_pods.go:61] "storage-provisioner" [8bcc482a-6905-436a-8d90-7eee9ba18f8b] Running
	I0920 19:09:58.148381  302869 system_pods.go:74] duration metric: took 182.677921ms to wait for pod list to return data ...
	I0920 19:09:58.148387  302869 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:09:58.344318  302869 default_sa.go:45] found service account: "default"
	I0920 19:09:58.344346  302869 default_sa.go:55] duration metric: took 195.952788ms for default service account to be created ...
	I0920 19:09:58.344357  302869 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:09:58.547996  302869 system_pods.go:86] 9 kube-system pods found
	I0920 19:09:58.548034  302869 system_pods.go:89] "coredns-7c65d6cfc9-2zlww" [5eb78763-7160-4ae9-80c3-87a82a6dc992] Running
	I0920 19:09:58.548043  302869 system_pods.go:89] "coredns-7c65d6cfc9-7fxdr" [85a441e8-39b0-4623-a7bd-eebbd1574f20] Running
	I0920 19:09:58.548048  302869 system_pods.go:89] "etcd-embed-certs-339897" [150a2276-3896-498e-89f7-44cf4554da69] Running
	I0920 19:09:58.548054  302869 system_pods.go:89] "kube-apiserver-embed-certs-339897" [396520a3-2567-4267-852d-9f9525dd5e01] Running
	I0920 19:09:58.548060  302869 system_pods.go:89] "kube-controller-manager-embed-certs-339897" [7f64ad97-3230-4cf5-92ad-cf58ef88a2b0] Running
	I0920 19:09:58.548066  302869 system_pods.go:89] "kube-proxy-whcbh" [3a2dbb60-1a51-4874-98b8-75d1a35b0512] Running
	I0920 19:09:58.548070  302869 system_pods.go:89] "kube-scheduler-embed-certs-339897" [31214783-f8cf-46c6-a305-fde7692dfc72] Running
	I0920 19:09:58.548079  302869 system_pods.go:89] "metrics-server-6867b74b74-tw9fh" [8366591d-8916-4b9f-be8a-64ddc185f576] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:58.548085  302869 system_pods.go:89] "storage-provisioner" [8bcc482a-6905-436a-8d90-7eee9ba18f8b] Running
	I0920 19:09:58.548099  302869 system_pods.go:126] duration metric: took 203.735171ms to wait for k8s-apps to be running ...
	I0920 19:09:58.548108  302869 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:09:58.548165  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:58.563235  302869 system_svc.go:56] duration metric: took 15.107997ms WaitForService to wait for kubelet
	I0920 19:09:58.563274  302869 kubeadm.go:582] duration metric: took 8.362165276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:09:58.563299  302869 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:09:58.744093  302869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:09:58.744155  302869 node_conditions.go:123] node cpu capacity is 2
	I0920 19:09:58.744171  302869 node_conditions.go:105] duration metric: took 180.864643ms to run NodePressure ...
	I0920 19:09:58.744186  302869 start.go:241] waiting for startup goroutines ...
	I0920 19:09:58.744196  302869 start.go:246] waiting for cluster config update ...
	I0920 19:09:58.744220  302869 start.go:255] writing updated cluster config ...
	I0920 19:09:58.744526  302869 ssh_runner.go:195] Run: rm -f paused
	I0920 19:09:58.794946  302869 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:09:58.797418  302869 out.go:177] * Done! kubectl is now configured to use "embed-certs-339897" cluster and "default" namespace by default
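	At this point the "embed-certs-339897" profile is up with storage-provisioner, default-storageclass and metrics-server enabled. A minimal sketch of verifying those addons by hand, assuming the kubectl context name matches the profile (as the line above states) and that the addon uses the usual metrics-server label and APIService name:

	    # the deployment and pod created by the metrics-server manifests applied earlier in the log
	    kubectl --context embed-certs-339897 -n kube-system get deploy,pods -l k8s-app=metrics-server
	    # the aggregated metrics API registered by metrics-apiservice.yaml
	    kubectl --context embed-certs-339897 get apiservice v1beta1.metrics.k8s.io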
	I0920 19:09:56.464024  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:58.464282  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:00.963419  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:02.963506  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:04.963804  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:07.463546  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:09.962855  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:11.963447  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:13.964915  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:17.296411  303486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 19:10:17.296525  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:17.296765  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:16.462968  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:18.963906  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:22.297630  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:22.297923  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:21.463201  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:22.457112  302538 pod_ready.go:82] duration metric: took 4m0.000881628s for pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace to be "Ready" ...
	E0920 19:10:22.457161  302538 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 19:10:22.457180  302538 pod_ready.go:39] duration metric: took 4m14.047738931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:10:22.457208  302538 kubeadm.go:597] duration metric: took 4m21.028566787s to restartPrimaryControlPlane
	W0920 19:10:22.457265  302538 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 19:10:22.457291  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:10:32.298239  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:32.298525  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:48.632052  302538 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.17473972s)
	I0920 19:10:48.632143  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:10:48.648205  302538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:10:48.658969  302538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:10:48.668954  302538 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:10:48.668981  302538 kubeadm.go:157] found existing configuration files:
	
	I0920 19:10:48.669035  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:10:48.678138  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:10:48.678229  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:10:48.687960  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:10:48.697578  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:10:48.697644  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:10:48.707573  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:10:48.717059  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:10:48.717123  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:10:48.727642  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:10:48.737599  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:10:48.737681  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
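	The four grep-then-remove passes above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is deleted before kubeadm init re-creates it. A minimal shell sketch of the same logic, using the endpoint from this run:

	    # drop any kubeconfig that does not reference the expected control-plane endpoint
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	        sudo rm -f "/etc/kubernetes/$f"
	      fi
	    done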
	I0920 19:10:48.749542  302538 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:10:48.795278  302538 kubeadm.go:310] W0920 19:10:48.780113    2961 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:10:48.796096  302538 kubeadm.go:310] W0920 19:10:48.780928    2961 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:10:48.910958  302538 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:10:52.299257  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:52.299561  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
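	The repeated [kubelet-check] failures from process 303486 mean the kubelet on that node never answered its local health endpoint within kubeadm's timeout. When this message appears, the usual next step (which kubeadm's full error text also suggests) is to inspect the kubelet unit inside the guest, e.g. via minikube ssh; a minimal sketch, with the profile name left as a placeholder:

	    # inside the affected VM (minikube ssh -p <profile>): check the kubelet service and recent logs
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet --no-pager | tail -n 50
	    # the probe kubeadm keeps retrying
	    curl -sSL http://localhost:10248/healthz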
	I0920 19:10:56.716717  302538 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 19:10:56.716805  302538 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:10:56.716938  302538 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:10:56.717078  302538 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:10:56.717170  302538 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 19:10:56.717225  302538 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:10:56.719086  302538 out.go:235]   - Generating certificates and keys ...
	I0920 19:10:56.719199  302538 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:10:56.719286  302538 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:10:56.719407  302538 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:10:56.719505  302538 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:10:56.719624  302538 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:10:56.719720  302538 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:10:56.719811  302538 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:10:56.719928  302538 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:10:56.720049  302538 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:10:56.720154  302538 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:10:56.720224  302538 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:10:56.720287  302538 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:10:56.720334  302538 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:10:56.720386  302538 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 19:10:56.720432  302538 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:10:56.720486  302538 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:10:56.720533  302538 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:10:56.720606  302538 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:10:56.720701  302538 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:10:56.722504  302538 out.go:235]   - Booting up control plane ...
	I0920 19:10:56.722620  302538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:10:56.722748  302538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:10:56.722872  302538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:10:56.723020  302538 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:10:56.723105  302538 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:10:56.723148  302538 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:10:56.723337  302538 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 19:10:56.723455  302538 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 19:10:56.723515  302538 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.448196ms
	I0920 19:10:56.723612  302538 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 19:10:56.723706  302538 kubeadm.go:310] [api-check] The API server is healthy after 5.001495273s
	I0920 19:10:56.723888  302538 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 19:10:56.724046  302538 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 19:10:56.724131  302538 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 19:10:56.724406  302538 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-037711 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 19:10:56.724464  302538 kubeadm.go:310] [bootstrap-token] Using token: 2hi1gl.ipidz4nvj8gip8th
	I0920 19:10:56.726099  302538 out.go:235]   - Configuring RBAC rules ...
	I0920 19:10:56.726212  302538 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 19:10:56.726315  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 19:10:56.726479  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 19:10:56.726641  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 19:10:56.726794  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 19:10:56.726926  302538 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 19:10:56.727082  302538 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 19:10:56.727154  302538 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 19:10:56.727202  302538 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 19:10:56.727209  302538 kubeadm.go:310] 
	I0920 19:10:56.727261  302538 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 19:10:56.727267  302538 kubeadm.go:310] 
	I0920 19:10:56.727363  302538 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 19:10:56.727383  302538 kubeadm.go:310] 
	I0920 19:10:56.727424  302538 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 19:10:56.727507  302538 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 19:10:56.727607  302538 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 19:10:56.727620  302538 kubeadm.go:310] 
	I0920 19:10:56.727699  302538 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 19:10:56.727712  302538 kubeadm.go:310] 
	I0920 19:10:56.727775  302538 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 19:10:56.727790  302538 kubeadm.go:310] 
	I0920 19:10:56.727865  302538 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 19:10:56.727969  302538 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 19:10:56.728032  302538 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 19:10:56.728038  302538 kubeadm.go:310] 
	I0920 19:10:56.728106  302538 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 19:10:56.728171  302538 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 19:10:56.728177  302538 kubeadm.go:310] 
	I0920 19:10:56.728271  302538 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2hi1gl.ipidz4nvj8gip8th \
	I0920 19:10:56.728406  302538 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 \
	I0920 19:10:56.728438  302538 kubeadm.go:310] 	--control-plane 
	I0920 19:10:56.728451  302538 kubeadm.go:310] 
	I0920 19:10:56.728571  302538 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 19:10:56.728577  302538 kubeadm.go:310] 
	I0920 19:10:56.728675  302538 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2hi1gl.ipidz4nvj8gip8th \
	I0920 19:10:56.728823  302538 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 
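	The two deprecation warnings at the start of this kubeadm run report that /var/tmp/minikube/kubeadm.yaml still uses the kubeadm.k8s.io/v1beta3 API. kubeadm's own suggestion is to migrate the file; a minimal sketch, with the output file name chosen here only for illustration:

	    # rewrite the v1beta3 ClusterConfiguration/InitConfiguration using the newer config API
	    sudo kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /var/tmp/minikube/kubeadm-migrated.yaml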
	I0920 19:10:56.728837  302538 cni.go:84] Creating CNI manager for ""
	I0920 19:10:56.728843  302538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:10:56.730851  302538 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:10:56.732462  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:10:56.745326  302538 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:10:56.764458  302538 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:10:56.764563  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:56.764620  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-037711 minikube.k8s.io/updated_at=2024_09_20T19_10_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=no-preload-037711 minikube.k8s.io/primary=true
	I0920 19:10:56.792026  302538 ops.go:34] apiserver oom_adj: -16
	I0920 19:10:56.976178  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:57.477172  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:57.977076  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:58.476357  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:58.977162  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:59.476924  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:59.976506  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:11:00.080925  302538 kubeadm.go:1113] duration metric: took 3.316440483s to wait for elevateKubeSystemPrivileges
	I0920 19:11:00.080968  302538 kubeadm.go:394] duration metric: took 4m58.701872852s to StartCluster
	I0920 19:11:00.080994  302538 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:11:00.081106  302538 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:11:00.082815  302538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:11:00.083064  302538 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:11:00.083137  302538 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:11:00.083243  302538 addons.go:69] Setting storage-provisioner=true in profile "no-preload-037711"
	I0920 19:11:00.083263  302538 addons.go:234] Setting addon storage-provisioner=true in "no-preload-037711"
	W0920 19:11:00.083272  302538 addons.go:243] addon storage-provisioner should already be in state true
	I0920 19:11:00.083263  302538 addons.go:69] Setting default-storageclass=true in profile "no-preload-037711"
	I0920 19:11:00.083299  302538 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-037711"
	I0920 19:11:00.083308  302538 host.go:66] Checking if "no-preload-037711" exists ...
	I0920 19:11:00.083304  302538 addons.go:69] Setting metrics-server=true in profile "no-preload-037711"
	I0920 19:11:00.083342  302538 addons.go:234] Setting addon metrics-server=true in "no-preload-037711"
	W0920 19:11:00.083354  302538 addons.go:243] addon metrics-server should already be in state true
	I0920 19:11:00.083385  302538 host.go:66] Checking if "no-preload-037711" exists ...
	I0920 19:11:00.083315  302538 config.go:182] Loaded profile config "no-preload-037711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:11:00.083667  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.083709  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.083715  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.083750  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.083864  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.083912  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.084969  302538 out.go:177] * Verifying Kubernetes components...
	I0920 19:11:00.086652  302538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:11:00.102128  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
	I0920 19:11:00.102362  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33853
	I0920 19:11:00.102750  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43575
	I0920 19:11:00.102879  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.103041  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.103431  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.103635  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.103651  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.103767  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.103783  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.104022  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.104040  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.104042  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.104180  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.104383  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.104394  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.104842  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.104881  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.104927  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.104963  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.107816  302538 addons.go:234] Setting addon default-storageclass=true in "no-preload-037711"
	W0920 19:11:00.107836  302538 addons.go:243] addon default-storageclass should already be in state true
	I0920 19:11:00.107865  302538 host.go:66] Checking if "no-preload-037711" exists ...
	I0920 19:11:00.108193  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.108236  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.121661  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I0920 19:11:00.122693  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.123520  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.123642  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.124299  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.124530  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.125624  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40695
	I0920 19:11:00.126343  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.126439  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I0920 19:11:00.126868  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.126947  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:11:00.127277  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.127302  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.127572  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.127599  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.127646  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.127902  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.128095  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.128318  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.128360  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.129099  302538 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:11:00.129788  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:11:00.130688  302538 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:11:00.130713  302538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:11:00.130732  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:11:00.131393  302538 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 19:11:00.132404  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:11:00.132432  302538 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:11:00.132454  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:11:00.134112  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.134627  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:11:00.134690  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.135041  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:11:00.135215  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:11:00.135448  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:11:00.135550  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:11:00.136315  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.136816  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:11:00.136849  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.137011  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:11:00.137231  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:11:00.137409  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:11:00.137589  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:11:00.166369  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I0920 19:11:00.166884  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.167464  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.167483  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.167850  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.168037  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.169668  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:11:00.169875  302538 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:11:00.169891  302538 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:11:00.169925  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:11:00.172907  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.173383  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:11:00.173416  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.173577  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:11:00.173820  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:11:00.174010  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:11:00.174212  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:11:00.275468  302538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:11:00.290839  302538 node_ready.go:35] waiting up to 6m0s for node "no-preload-037711" to be "Ready" ...
	I0920 19:11:00.300222  302538 node_ready.go:49] node "no-preload-037711" has status "Ready":"True"
	I0920 19:11:00.300244  302538 node_ready.go:38] duration metric: took 9.368069ms for node "no-preload-037711" to be "Ready" ...
	I0920 19:11:00.300253  302538 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:11:00.306099  302538 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:00.364927  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:11:00.364956  302538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 19:11:00.382910  302538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:11:00.392581  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:11:00.392611  302538 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:11:00.404275  302538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:11:00.442677  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:11:00.442707  302538 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:11:00.500976  302538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:11:01.337157  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337196  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337169  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337265  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337558  302538 main.go:141] libmachine: (no-preload-037711) DBG | Closing plugin on server side
	I0920 19:11:01.337573  302538 main.go:141] libmachine: (no-preload-037711) DBG | Closing plugin on server side
	I0920 19:11:01.337600  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.337613  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.337641  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337649  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337685  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.337702  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.337711  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337720  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337961  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.337978  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.338064  302538 main.go:141] libmachine: (no-preload-037711) DBG | Closing plugin on server side
	I0920 19:11:01.338114  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.338133  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.395956  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.395989  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.396327  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.396355  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.580133  302538 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.079115769s)
	I0920 19:11:01.580188  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.580203  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.580548  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.580568  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.580578  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.580586  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.580817  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.580842  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.580853  302538 addons.go:475] Verifying addon metrics-server=true in "no-preload-037711"
	I0920 19:11:01.582786  302538 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 19:11:01.584283  302538 addons.go:510] duration metric: took 1.501156808s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
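	The three kubectl applies above are the whole addon enablement for this profile. A minimal manual check of the same addons, assuming the kubeconfig context matches the profile name no-preload-037711:
	    kubectl --context no-preload-037711 -n kube-system get deploy metrics-server
	    kubectl --context no-preload-037711 -n kube-system get pod storage-provisioner
	    kubectl --context no-preload-037711 get storageclass    # default-storageclass addon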
	I0920 19:11:02.314471  302538 pod_ready.go:103] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:04.817174  302538 pod_ready.go:103] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:07.312399  302538 pod_ready.go:103] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:07.812969  302538 pod_ready.go:93] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:07.812999  302538 pod_ready.go:82] duration metric: took 7.506877081s for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:07.813008  302538 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:07.818172  302538 pod_ready.go:93] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:07.818200  302538 pod_ready.go:82] duration metric: took 5.184579ms for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:07.818211  302538 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:09.825772  302538 pod_ready.go:103] pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:10.325453  302538 pod_ready.go:93] pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:10.325479  302538 pod_ready.go:82] duration metric: took 2.507262085s for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:10.325489  302538 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:10.331181  302538 pod_ready.go:93] pod "kube-scheduler-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:10.331208  302538 pod_ready.go:82] duration metric: took 5.711573ms for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:10.331216  302538 pod_ready.go:39] duration metric: took 10.030954081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:11:10.331233  302538 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:11:10.331286  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:11:10.348104  302538 api_server.go:72] duration metric: took 10.265008499s to wait for apiserver process to appear ...
	I0920 19:11:10.348135  302538 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:11:10.348157  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:11:10.352242  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 200:
	ok
	I0920 19:11:10.353228  302538 api_server.go:141] control plane version: v1.31.1
	I0920 19:11:10.353249  302538 api_server.go:131] duration metric: took 5.107446ms to wait for apiserver health ...
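	The same health probe can be issued by hand against the endpoint shown above (a sketch; -k skips verification of the cluster CA, and it assumes the default anonymous access to /healthz):
	    curl -k https://192.168.61.136:8443/healthz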
	I0920 19:11:10.353257  302538 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:11:10.358560  302538 system_pods.go:59] 9 kube-system pods found
	I0920 19:11:10.358588  302538 system_pods.go:61] "coredns-7c65d6cfc9-gdfh9" [61c6d6d8-62b9-4db3-a3c3-fd0daec82a9f] Running
	I0920 19:11:10.358593  302538 system_pods.go:61] "coredns-7c65d6cfc9-h84nm" [6ada3ba7-1ccd-474b-850b-c00a77dfbb92] Running
	I0920 19:11:10.358597  302538 system_pods.go:61] "etcd-no-preload-037711" [9ace2dcd-0562-46d5-99be-65be4ea053d9] Running
	I0920 19:11:10.358601  302538 system_pods.go:61] "kube-apiserver-no-preload-037711" [1dbfa130-d2dd-420d-a32c-1e82b535c112] Running
	I0920 19:11:10.358604  302538 system_pods.go:61] "kube-controller-manager-no-preload-037711" [56462390-dedd-4281-ac85-2671f7a10cb1] Running
	I0920 19:11:10.358607  302538 system_pods.go:61] "kube-proxy-bvfqh" [2170ef3f-58f0-4d42-9f15-d9c952e0e2ec] Running
	I0920 19:11:10.358610  302538 system_pods.go:61] "kube-scheduler-no-preload-037711" [e996ce53-7ee6-4d1d-bd0b-8188d76966b9] Running
	I0920 19:11:10.358617  302538 system_pods.go:61] "metrics-server-6867b74b74-rpfqm" [ba7c8518-6c3e-4751-a9a5-29c77990a29c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:11:10.358620  302538 system_pods.go:61] "storage-provisioner" [e7f05c0a-c6be-4e68-959e-966c17c9cc5e] Running
	I0920 19:11:10.358629  302538 system_pods.go:74] duration metric: took 5.365343ms to wait for pod list to return data ...
	I0920 19:11:10.358635  302538 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:11:10.361229  302538 default_sa.go:45] found service account: "default"
	I0920 19:11:10.361255  302538 default_sa.go:55] duration metric: took 2.612292ms for default service account to be created ...
	I0920 19:11:10.361264  302538 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:11:10.367188  302538 system_pods.go:86] 9 kube-system pods found
	I0920 19:11:10.367221  302538 system_pods.go:89] "coredns-7c65d6cfc9-gdfh9" [61c6d6d8-62b9-4db3-a3c3-fd0daec82a9f] Running
	I0920 19:11:10.367229  302538 system_pods.go:89] "coredns-7c65d6cfc9-h84nm" [6ada3ba7-1ccd-474b-850b-c00a77dfbb92] Running
	I0920 19:11:10.367235  302538 system_pods.go:89] "etcd-no-preload-037711" [9ace2dcd-0562-46d5-99be-65be4ea053d9] Running
	I0920 19:11:10.367241  302538 system_pods.go:89] "kube-apiserver-no-preload-037711" [1dbfa130-d2dd-420d-a32c-1e82b535c112] Running
	I0920 19:11:10.367248  302538 system_pods.go:89] "kube-controller-manager-no-preload-037711" [56462390-dedd-4281-ac85-2671f7a10cb1] Running
	I0920 19:11:10.367254  302538 system_pods.go:89] "kube-proxy-bvfqh" [2170ef3f-58f0-4d42-9f15-d9c952e0e2ec] Running
	I0920 19:11:10.367260  302538 system_pods.go:89] "kube-scheduler-no-preload-037711" [e996ce53-7ee6-4d1d-bd0b-8188d76966b9] Running
	I0920 19:11:10.367267  302538 system_pods.go:89] "metrics-server-6867b74b74-rpfqm" [ba7c8518-6c3e-4751-a9a5-29c77990a29c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:11:10.367273  302538 system_pods.go:89] "storage-provisioner" [e7f05c0a-c6be-4e68-959e-966c17c9cc5e] Running
	I0920 19:11:10.367283  302538 system_pods.go:126] duration metric: took 6.01247ms to wait for k8s-apps to be running ...
	I0920 19:11:10.367292  302538 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:11:10.367354  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:11:10.381551  302538 system_svc.go:56] duration metric: took 14.250301ms WaitForService to wait for kubelet
	I0920 19:11:10.381582  302538 kubeadm.go:582] duration metric: took 10.298492318s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:11:10.381601  302538 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:11:10.385405  302538 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:11:10.385442  302538 node_conditions.go:123] node cpu capacity is 2
	I0920 19:11:10.385455  302538 node_conditions.go:105] duration metric: took 3.849463ms to run NodePressure ...
	I0920 19:11:10.385468  302538 start.go:241] waiting for startup goroutines ...
	I0920 19:11:10.385474  302538 start.go:246] waiting for cluster config update ...
	I0920 19:11:10.385485  302538 start.go:255] writing updated cluster config ...
	I0920 19:11:10.385786  302538 ssh_runner.go:195] Run: rm -f paused
	I0920 19:11:10.436362  302538 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:11:10.438538  302538 out.go:177] * Done! kubectl is now configured to use "no-preload-037711" cluster and "default" namespace by default
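	The state the waiters above checked (node Ready, control-plane pods Ready, metrics-server still Pending) can be re-inspected at any point with the context this line just made the default, for example:
	    kubectl get nodes
	    kubectl -n kube-system get pods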
	I0920 19:11:32.301334  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:11:32.302020  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:11:32.302048  303486 kubeadm.go:310] 
	I0920 19:11:32.302147  303486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 19:11:32.302252  303486 kubeadm.go:310] 		timed out waiting for the condition
	I0920 19:11:32.302279  303486 kubeadm.go:310] 
	I0920 19:11:32.302366  303486 kubeadm.go:310] 	This error is likely caused by:
	I0920 19:11:32.302453  303486 kubeadm.go:310] 		- The kubelet is not running
	I0920 19:11:32.302713  303486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 19:11:32.302731  303486 kubeadm.go:310] 
	I0920 19:11:32.303023  303486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 19:11:32.303099  303486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 19:11:32.303200  303486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 19:11:32.303232  303486 kubeadm.go:310] 
	I0920 19:11:32.303438  303486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 19:11:32.303669  303486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 19:11:32.303699  303486 kubeadm.go:310] 
	I0920 19:11:32.303965  303486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 19:11:32.304199  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 19:11:32.304410  303486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 19:11:32.304577  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 19:11:32.304624  303486 kubeadm.go:310] 
	I0920 19:11:32.305105  303486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:11:32.305465  303486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 19:11:32.305655  303486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0920 19:11:32.305713  303486 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
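	The checks the kubeadm output recommends can be run directly on the node (for example via minikube ssh); CONTAINERID is the placeholder from the message above:
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID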
	
	I0920 19:11:32.305758  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:11:32.760742  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:11:32.775675  303486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:11:32.785785  303486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:11:32.785806  303486 kubeadm.go:157] found existing configuration files:
	
	I0920 19:11:32.785854  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:11:32.795133  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:11:32.795210  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:11:32.805681  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:11:32.815299  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:11:32.815362  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:11:32.827215  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:11:32.836597  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:11:32.836682  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:11:32.846621  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:11:32.855610  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:11:32.855675  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
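	The four grep/rm pairs above amount to the following loop on the node, a sketch of the same stale-config cleanup rather than minikube's actual code:
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	        || sudo rm -f /etc/kubernetes/$f
	    done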
	I0920 19:11:32.866824  303486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:11:33.103745  303486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:13:29.101212  303486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 19:13:29.101347  303486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 19:13:29.103031  303486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 19:13:29.103142  303486 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:13:29.103216  303486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:13:29.103318  303486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:13:29.103437  303486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 19:13:29.103507  303486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:13:29.105521  303486 out.go:235]   - Generating certificates and keys ...
	I0920 19:13:29.105622  303486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:13:29.105704  303486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:13:29.105820  303486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:13:29.105955  303486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:13:29.106058  303486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:13:29.106132  303486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:13:29.106219  303486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:13:29.106318  303486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:13:29.106430  303486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:13:29.106548  303486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:13:29.106611  303486 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:13:29.106699  303486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:13:29.106766  303486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:13:29.106844  303486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:13:29.106935  303486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:13:29.107011  303486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:13:29.107117  303486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:13:29.107223  303486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:13:29.107289  303486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:13:29.107376  303486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:13:29.108804  303486 out.go:235]   - Booting up control plane ...
	I0920 19:13:29.108952  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:13:29.109021  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:13:29.109082  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:13:29.109166  303486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:13:29.109313  303486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 19:13:29.109359  303486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 19:13:29.109462  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.109630  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.109699  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.109878  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.109966  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.110133  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.110213  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.110382  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.110441  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.110606  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.110616  303486 kubeadm.go:310] 
	I0920 19:13:29.110661  303486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 19:13:29.110699  303486 kubeadm.go:310] 		timed out waiting for the condition
	I0920 19:13:29.110706  303486 kubeadm.go:310] 
	I0920 19:13:29.110739  303486 kubeadm.go:310] 	This error is likely caused by:
	I0920 19:13:29.110769  303486 kubeadm.go:310] 		- The kubelet is not running
	I0920 19:13:29.110866  303486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 19:13:29.110875  303486 kubeadm.go:310] 
	I0920 19:13:29.110969  303486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 19:13:29.111003  303486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 19:13:29.111031  303486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 19:13:29.111037  303486 kubeadm.go:310] 
	I0920 19:13:29.111141  303486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 19:13:29.111224  303486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 19:13:29.111231  303486 kubeadm.go:310] 
	I0920 19:13:29.111327  303486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 19:13:29.111407  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 19:13:29.111481  303486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 19:13:29.111542  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 19:13:29.111610  303486 kubeadm.go:394] duration metric: took 7m56.768319159s to StartCluster
	I0920 19:13:29.111640  303486 kubeadm.go:310] 
	I0920 19:13:29.111664  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:13:29.111734  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:13:29.157817  303486 cri.go:89] found id: ""
	I0920 19:13:29.157849  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.157859  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:13:29.157867  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:13:29.157950  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:13:29.192130  303486 cri.go:89] found id: ""
	I0920 19:13:29.192164  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.192179  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:13:29.192187  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:13:29.192243  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:13:29.227594  303486 cri.go:89] found id: ""
	I0920 19:13:29.227631  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.227642  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:13:29.227651  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:13:29.227724  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:13:29.261948  303486 cri.go:89] found id: ""
	I0920 19:13:29.261981  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.261996  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:13:29.262004  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:13:29.262072  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:13:29.295148  303486 cri.go:89] found id: ""
	I0920 19:13:29.295181  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.295191  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:13:29.295200  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:13:29.295270  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:13:29.328094  303486 cri.go:89] found id: ""
	I0920 19:13:29.328127  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.328135  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:13:29.328142  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:13:29.328194  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:13:29.368830  303486 cri.go:89] found id: ""
	I0920 19:13:29.368870  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.368878  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:13:29.368885  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:13:29.368947  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:13:29.420051  303486 cri.go:89] found id: ""
	I0920 19:13:29.420081  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.420091  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:13:29.420106  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:13:29.420123  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:13:29.498322  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:13:29.498350  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:13:29.498364  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:13:29.601796  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:13:29.601842  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:13:29.644325  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:13:29.644368  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:13:29.692691  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:13:29.692736  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
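	The same diagnostics bundle can be gathered by hand on the node with the commands from the Run: lines above:
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo crictl ps -a
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400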
	W0920 19:13:29.707508  303486 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 19:13:29.707577  303486 out.go:270] * 
	W0920 19:13:29.707646  303486 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 19:13:29.707664  303486 out.go:270] * 
	W0920 19:13:29.708560  303486 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 19:13:29.711313  303486 out.go:201] 
	W0920 19:13:29.712520  303486 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 19:13:29.712553  303486 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 19:13:29.712576  303486 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 19:13:29.713832  303486 out.go:201] 
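
Editor's note: the repeated "kubelet-check" failures above are kubeadm polling the kubelet's healthz endpoint on localhost:10248 until its 40s wait expires. A minimal Go sketch of an equivalent probe, for illustration only (endpoint, port, and timeout are taken from the log output above; this is not minikube's or kubeadm's implementation):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Same endpoint the kubeadm "kubelet-check" messages above are polling.
		client := &http.Client{Timeout: 2 * time.Second}
		deadline := time.Now().Add(40 * time.Second) // mirrors kubeadm's initial 40s timeout
		for time.Now().Before(deadline) {
			resp, err := client.Get("http://localhost:10248/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("kubelet is healthy")
					return
				}
			}
			time.Sleep(time.Second)
		}
		fmt.Println("timed out waiting for kubelet /healthz; check 'journalctl -xeu kubelet'")
	}

A connection-refused result from this probe, as seen repeatedly above, means the kubelet process never came up, which is why the failure is classified as K8S_KUBELET_NOT_RUNNING rather than a control-plane crash.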
	
	
	==> CRI-O <==
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.032319839Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a6af4a57-68f8-47dd-9432-79fd2324eef9 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.033415796Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bd3c792c-882a-4ee9-9f30-0740dc14a42e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.033860656Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859941033836824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd3c792c-882a-4ee9-9f30-0740dc14a42e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.034840396Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98e67b64-37e3-4e01-9a16-f3e6e22ed86d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.034985551Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98e67b64-37e3-4e01-9a16-f3e6e22ed86d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.035265987Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2723b5d731b534edb71c51be36dec0081571147745dfe4442c1ee88181556806,PodSandboxId:d2208cdae4af7ced77ea12dbd1e3947a0da5374812086aa97b51738d6b35e3df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859392193511613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bcc482a-6905-436a-8d90-7eee9ba18f8b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f0fd0790b5de170f5af7f48ee38678a47bf873c54e6758b49e81ce86fbe9611,PodSandboxId:8ac9258f316c5409c0553943663ab92d1b23c2be46e527c2a3eae7f1c2acc8c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859391469683495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85a441e8-39b0-4623-a7bd-eebbd1574f20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4018f260defa1395c4657499327bc4925863a9f2b0cb4ca64a1fff4603ffe1f,PodSandboxId:24021bdf4ca6509fd90d9b58cb2b724be60be9a1ffb3285df8076ca2168b7feb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859391039092866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2zlww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
eb78763-7160-4ae9-80c3-87a82a6dc992,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c364c544d7d7f031ff432964306d762430675bdcc271d3bab8e950e2a8f7fc28,PodSandboxId:e7ef26e4c8f0c653bd9881855e65395bf79dcaf6fdaddeae4dc5e946c60146a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726859390740091257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-whcbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2dbb60-1a51-4874-98b8-75d1a35b0512,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ccd1fc6b8f8df2809889e166b96a1b6c2f3430bc180ebf5a173b2821961e388,PodSandboxId:6d28730d9d07ce3c7619c35ffb691940a085f62bcb611da954fd68df457e9c5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859379711547432,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84766edb8659eb295c2d46988cdb09d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf43e654caeb5f98cdf5e1da55898b1b9522078243dc9fc219566b5edb16a0f8,PodSandboxId:089d0106a913c501ade84880df8565badec9254b77b75dde21d58d61626d8eb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859379751370179,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 509a6ad1e89e1bc816872362edf1d642,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cf7beb540ba5cb3ff61d3baf08eafacc28e1e1c95c33cb69a1032e335f1ed53,PodSandboxId:b29721bb4db6b41c0e4c1131aa8113f2e97ae8410a53507d8263ca02504b281f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859379726914209,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fddbf0260d53ab7d82af8de05368be,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb06bd75ec7169fc5a461bfe6e3646d39d5a4ee55e2ccaa859a7ed4a2d4a2f0,PodSandboxId:b6cdc4bbe6592859f144838f4dcf92504c58b9608f3a27516c156df6414d15d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859379680697813,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c4226e1838e7a3ea47eacc9d8a2390,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aca960651b40b7a4edbec39fb2f9680b94b5bf9052d8e236bcb33f39f501413,PodSandboxId:662adc8c49ec5df33d030dc501e3b781286be011fd95b47104498c6591e70ef1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726859092343694002,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fddbf0260d53ab7d82af8de05368be,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98e67b64-37e3-4e01-9a16-f3e6e22ed86d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.071388356Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a28adfeb-e942-458a-a6f8-ba85638bdb28 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.071481805Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a28adfeb-e942-458a-a6f8-ba85638bdb28 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.072380020Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94f38b63-3f0f-429d-8770-9473a71fca9d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.072772441Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859941072751153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94f38b63-3f0f-429d-8770-9473a71fca9d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.073488764Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bfa133eb-6400-4dc5-92fa-b35a884f5684 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.073551148Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bfa133eb-6400-4dc5-92fa-b35a884f5684 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.073763105Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2723b5d731b534edb71c51be36dec0081571147745dfe4442c1ee88181556806,PodSandboxId:d2208cdae4af7ced77ea12dbd1e3947a0da5374812086aa97b51738d6b35e3df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859392193511613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bcc482a-6905-436a-8d90-7eee9ba18f8b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f0fd0790b5de170f5af7f48ee38678a47bf873c54e6758b49e81ce86fbe9611,PodSandboxId:8ac9258f316c5409c0553943663ab92d1b23c2be46e527c2a3eae7f1c2acc8c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859391469683495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85a441e8-39b0-4623-a7bd-eebbd1574f20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4018f260defa1395c4657499327bc4925863a9f2b0cb4ca64a1fff4603ffe1f,PodSandboxId:24021bdf4ca6509fd90d9b58cb2b724be60be9a1ffb3285df8076ca2168b7feb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859391039092866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2zlww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
eb78763-7160-4ae9-80c3-87a82a6dc992,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c364c544d7d7f031ff432964306d762430675bdcc271d3bab8e950e2a8f7fc28,PodSandboxId:e7ef26e4c8f0c653bd9881855e65395bf79dcaf6fdaddeae4dc5e946c60146a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726859390740091257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-whcbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2dbb60-1a51-4874-98b8-75d1a35b0512,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ccd1fc6b8f8df2809889e166b96a1b6c2f3430bc180ebf5a173b2821961e388,PodSandboxId:6d28730d9d07ce3c7619c35ffb691940a085f62bcb611da954fd68df457e9c5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859379711547432,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84766edb8659eb295c2d46988cdb09d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf43e654caeb5f98cdf5e1da55898b1b9522078243dc9fc219566b5edb16a0f8,PodSandboxId:089d0106a913c501ade84880df8565badec9254b77b75dde21d58d61626d8eb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859379751370179,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 509a6ad1e89e1bc816872362edf1d642,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cf7beb540ba5cb3ff61d3baf08eafacc28e1e1c95c33cb69a1032e335f1ed53,PodSandboxId:b29721bb4db6b41c0e4c1131aa8113f2e97ae8410a53507d8263ca02504b281f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859379726914209,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fddbf0260d53ab7d82af8de05368be,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb06bd75ec7169fc5a461bfe6e3646d39d5a4ee55e2ccaa859a7ed4a2d4a2f0,PodSandboxId:b6cdc4bbe6592859f144838f4dcf92504c58b9608f3a27516c156df6414d15d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859379680697813,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c4226e1838e7a3ea47eacc9d8a2390,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aca960651b40b7a4edbec39fb2f9680b94b5bf9052d8e236bcb33f39f501413,PodSandboxId:662adc8c49ec5df33d030dc501e3b781286be011fd95b47104498c6591e70ef1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726859092343694002,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fddbf0260d53ab7d82af8de05368be,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bfa133eb-6400-4dc5-92fa-b35a884f5684 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.112229090Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4024968d-744e-49e0-97b2-98859aeec811 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.112355788Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4024968d-744e-49e0-97b2-98859aeec811 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.114588048Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7acd129e-51e6-47d3-a026-99f3b3af7a9e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.115527091Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859941115495231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7acd129e-51e6-47d3-a026-99f3b3af7a9e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.118651204Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=917a1cdf-4b9f-40e0-be8e-c28008a8a9d4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.119018087Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:caaf4d7bbf79df01283a900ef80188a7f6b3f9bba013a22bb6edbb6f9efc59ec,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-tw9fh,Uid:8366591d-8916-4b9f-be8a-64ddc185f576,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726859392188603166,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-tw9fh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8366591d-8916-4b9f-be8a-64ddc185f576,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T19:09:51.873634285Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d2208cdae4af7ced77ea12dbd1e3947a0da5374812086aa97b51738d6b35e3df,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8bcc482a-6905-436a-8d90-7eee9ba18f8b,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726859392098816607,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bcc482a-6905-436a-8d90-7eee9ba18f8b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"vol
umes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-20T19:09:51.791734796Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8ac9258f316c5409c0553943663ab92d1b23c2be46e527c2a3eae7f1c2acc8c2,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-7fxdr,Uid:85a441e8-39b0-4623-a7bd-eebbd1574f20,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726859390617187549,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85a441e8-39b0-4623-a7bd-eebbd1574f20,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T19:09:50.309163860Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:24021bdf4ca6509fd90d9b58cb2b724be60be9a1ffb3285df8076ca2168b7feb,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-2zlww,Uid:5eb78763-7160-4ae9
-80c3-87a82a6dc992,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726859390469669976,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-2zlww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb78763-7160-4ae9-80c3-87a82a6dc992,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T19:09:50.161718151Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e7ef26e4c8f0c653bd9881855e65395bf79dcaf6fdaddeae4dc5e946c60146a0,Metadata:&PodSandboxMetadata{Name:kube-proxy-whcbh,Uid:3a2dbb60-1a51-4874-98b8-75d1a35b0512,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726859390430840513,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-whcbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2dbb60-1a51-4874-98b8-75d1a35b0512,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T19:09:50.117320098Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:089d0106a913c501ade84880df8565badec9254b77b75dde21d58d61626d8eb3,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-339897,Uid:509a6ad1e89e1bc816872362edf1d642,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726859379506942891,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 509a6ad1e89e1bc816872362edf1d642,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 509a6ad1e89e1bc816872362edf1d642,kubernetes.io/config.seen: 2024-09-20T19:09:39.069304897Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b29721bb4db6b41c0e4c1131aa8113f2e97ae8410a53507d8263ca02504b281f,Metadata:&PodSandboxM
etadata{Name:kube-apiserver-embed-certs-339897,Uid:f2fddbf0260d53ab7d82af8de05368be,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726859379505274336,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fddbf0260d53ab7d82af8de05368be,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.72:8443,kubernetes.io/config.hash: f2fddbf0260d53ab7d82af8de05368be,kubernetes.io/config.seen: 2024-09-20T19:09:39.069311144Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b6cdc4bbe6592859f144838f4dcf92504c58b9608f3a27516c156df6414d15d0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-339897,Uid:74c4226e1838e7a3ea47eacc9d8a2390,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726859379502626545,Labels:map[string]string{component: kub
e-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c4226e1838e7a3ea47eacc9d8a2390,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 74c4226e1838e7a3ea47eacc9d8a2390,kubernetes.io/config.seen: 2024-09-20T19:09:39.069308568Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6d28730d9d07ce3c7619c35ffb691940a085f62bcb611da954fd68df457e9c5d,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-339897,Uid:84766edb8659eb295c2d46988cdb09d1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726859379498728654,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84766edb8659eb295c2d46988cdb09d1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72
.72:2379,kubernetes.io/config.hash: 84766edb8659eb295c2d46988cdb09d1,kubernetes.io/config.seen: 2024-09-20T19:09:39.069309801Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=917a1cdf-4b9f-40e0-be8e-c28008a8a9d4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.120194553Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3a1740b-3c0d-47a5-990b-37b7eba26fbe name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.120263666Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3a1740b-3c0d-47a5-990b-37b7eba26fbe name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.120489515Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2723b5d731b534edb71c51be36dec0081571147745dfe4442c1ee88181556806,PodSandboxId:d2208cdae4af7ced77ea12dbd1e3947a0da5374812086aa97b51738d6b35e3df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859392193511613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bcc482a-6905-436a-8d90-7eee9ba18f8b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f0fd0790b5de170f5af7f48ee38678a47bf873c54e6758b49e81ce86fbe9611,PodSandboxId:8ac9258f316c5409c0553943663ab92d1b23c2be46e527c2a3eae7f1c2acc8c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859391469683495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85a441e8-39b0-4623-a7bd-eebbd1574f20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4018f260defa1395c4657499327bc4925863a9f2b0cb4ca64a1fff4603ffe1f,PodSandboxId:24021bdf4ca6509fd90d9b58cb2b724be60be9a1ffb3285df8076ca2168b7feb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859391039092866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2zlww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
eb78763-7160-4ae9-80c3-87a82a6dc992,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c364c544d7d7f031ff432964306d762430675bdcc271d3bab8e950e2a8f7fc28,PodSandboxId:e7ef26e4c8f0c653bd9881855e65395bf79dcaf6fdaddeae4dc5e946c60146a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726859390740091257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-whcbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2dbb60-1a51-4874-98b8-75d1a35b0512,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ccd1fc6b8f8df2809889e166b96a1b6c2f3430bc180ebf5a173b2821961e388,PodSandboxId:6d28730d9d07ce3c7619c35ffb691940a085f62bcb611da954fd68df457e9c5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859379711547432,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84766edb8659eb295c2d46988cdb09d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf43e654caeb5f98cdf5e1da55898b1b9522078243dc9fc219566b5edb16a0f8,PodSandboxId:089d0106a913c501ade84880df8565badec9254b77b75dde21d58d61626d8eb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859379751370179,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 509a6ad1e89e1bc816872362edf1d642,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cf7beb540ba5cb3ff61d3baf08eafacc28e1e1c95c33cb69a1032e335f1ed53,PodSandboxId:b29721bb4db6b41c0e4c1131aa8113f2e97ae8410a53507d8263ca02504b281f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859379726914209,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fddbf0260d53ab7d82af8de05368be,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb06bd75ec7169fc5a461bfe6e3646d39d5a4ee55e2ccaa859a7ed4a2d4a2f0,PodSandboxId:b6cdc4bbe6592859f144838f4dcf92504c58b9608f3a27516c156df6414d15d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859379680697813,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c4226e1838e7a3ea47eacc9d8a2390,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3a1740b-3c0d-47a5-990b-37b7eba26fbe name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.122392337Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a8f62b69-bd15-463b-8a69-aa90f1d9bd62 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.122515824Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a8f62b69-bd15-463b-8a69-aa90f1d9bd62 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:19:01 embed-certs-339897 crio[711]: time="2024-09-20 19:19:01.122883779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2723b5d731b534edb71c51be36dec0081571147745dfe4442c1ee88181556806,PodSandboxId:d2208cdae4af7ced77ea12dbd1e3947a0da5374812086aa97b51738d6b35e3df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859392193511613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bcc482a-6905-436a-8d90-7eee9ba18f8b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f0fd0790b5de170f5af7f48ee38678a47bf873c54e6758b49e81ce86fbe9611,PodSandboxId:8ac9258f316c5409c0553943663ab92d1b23c2be46e527c2a3eae7f1c2acc8c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859391469683495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85a441e8-39b0-4623-a7bd-eebbd1574f20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4018f260defa1395c4657499327bc4925863a9f2b0cb4ca64a1fff4603ffe1f,PodSandboxId:24021bdf4ca6509fd90d9b58cb2b724be60be9a1ffb3285df8076ca2168b7feb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859391039092866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2zlww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
eb78763-7160-4ae9-80c3-87a82a6dc992,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c364c544d7d7f031ff432964306d762430675bdcc271d3bab8e950e2a8f7fc28,PodSandboxId:e7ef26e4c8f0c653bd9881855e65395bf79dcaf6fdaddeae4dc5e946c60146a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726859390740091257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-whcbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2dbb60-1a51-4874-98b8-75d1a35b0512,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ccd1fc6b8f8df2809889e166b96a1b6c2f3430bc180ebf5a173b2821961e388,PodSandboxId:6d28730d9d07ce3c7619c35ffb691940a085f62bcb611da954fd68df457e9c5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859379711547432,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84766edb8659eb295c2d46988cdb09d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf43e654caeb5f98cdf5e1da55898b1b9522078243dc9fc219566b5edb16a0f8,PodSandboxId:089d0106a913c501ade84880df8565badec9254b77b75dde21d58d61626d8eb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859379751370179,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 509a6ad1e89e1bc816872362edf1d642,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cf7beb540ba5cb3ff61d3baf08eafacc28e1e1c95c33cb69a1032e335f1ed53,PodSandboxId:b29721bb4db6b41c0e4c1131aa8113f2e97ae8410a53507d8263ca02504b281f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859379726914209,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fddbf0260d53ab7d82af8de05368be,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb06bd75ec7169fc5a461bfe6e3646d39d5a4ee55e2ccaa859a7ed4a2d4a2f0,PodSandboxId:b6cdc4bbe6592859f144838f4dcf92504c58b9608f3a27516c156df6414d15d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859379680697813,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c4226e1838e7a3ea47eacc9d8a2390,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aca960651b40b7a4edbec39fb2f9680b94b5bf9052d8e236bcb33f39f501413,PodSandboxId:662adc8c49ec5df33d030dc501e3b781286be011fd95b47104498c6591e70ef1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726859092343694002,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fddbf0260d53ab7d82af8de05368be,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a8f62b69-bd15-463b-8a69-aa90f1d9bd62 name=/runtime.v1.RuntimeService/ListContainers
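
Editor's note: the Version, ImageFsInfo, and ListContainers entries above are CRI gRPC calls served by CRI-O on /var/run/crio/crio.sock while this report is collected. A hedged Go sketch of issuing the same ListContainers call yourself (the socket path comes from the kubeadm advice earlier in this log; the gRPC wiring and the k8s.io/cri-api client package are assumptions for illustration, not part of the test suite):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O socket referenced in the kubeadm troubleshooting advice above.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial crio: %v", err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same RPC as the ListContainers requests logged by CRI-O above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s %s %v\n", c.GetId(), c.GetMetadata().GetName(), c.GetState())
		}
	}

The output of such a query is what the "container status" table below summarizes: every control-plane container is CONTAINER_RUNNING, so the earlier kubelet failure belongs to the old-k8s-version (v1.20.0) cluster, not to this embed-certs node.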
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2723b5d731b53       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   d2208cdae4af7       storage-provisioner
	9f0fd0790b5de       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   8ac9258f316c5       coredns-7c65d6cfc9-7fxdr
	d4018f260defa       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   24021bdf4ca65       coredns-7c65d6cfc9-2zlww
	c364c544d7d7f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   e7ef26e4c8f0c       kube-proxy-whcbh
	cf43e654caeb5       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   089d0106a913c       kube-controller-manager-embed-certs-339897
	0cf7beb540ba5       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   b29721bb4db6b       kube-apiserver-embed-certs-339897
	5ccd1fc6b8f8d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   6d28730d9d07c       etcd-embed-certs-339897
	2eb06bd75ec71       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   b6cdc4bbe6592       kube-scheduler-embed-certs-339897
	9aca960651b40       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   662adc8c49ec5       kube-apiserver-embed-certs-339897
	
	
	==> coredns [9f0fd0790b5de170f5af7f48ee38678a47bf873c54e6758b49e81ce86fbe9611] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [d4018f260defa1395c4657499327bc4925863a9f2b0cb4ca64a1fff4603ffe1f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-339897
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-339897
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=embed-certs-339897
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T19_09_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 19:09:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-339897
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:18:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:15:01 +0000   Fri, 20 Sep 2024 19:09:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:15:01 +0000   Fri, 20 Sep 2024 19:09:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:15:01 +0000   Fri, 20 Sep 2024 19:09:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:15:01 +0000   Fri, 20 Sep 2024 19:09:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.72
	  Hostname:    embed-certs-339897
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 20e655c3886546f3b3a07c10a5b65d8e
	  System UUID:                20e655c3-8865-46f3-b3a0-7c10a5b65d8e
	  Boot ID:                    20c108b9-0be7-4e19-94a8-dadaa6f487ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-2zlww                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 coredns-7c65d6cfc9-7fxdr                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 etcd-embed-certs-339897                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m16s
	  kube-system                 kube-apiserver-embed-certs-339897             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-controller-manager-embed-certs-339897    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-proxy-whcbh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-scheduler-embed-certs-339897             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 metrics-server-6867b74b74-tw9fh               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m9s   kube-proxy       
	  Normal  Starting                 9m16s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m16s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m16s  kubelet          Node embed-certs-339897 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m16s  kubelet          Node embed-certs-339897 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m16s  kubelet          Node embed-certs-339897 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s  node-controller  Node embed-certs-339897 event: Registered Node embed-certs-339897 in Controller
	
	
	==> dmesg <==
	[  +0.051132] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037637] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.778132] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.874331] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.537649] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.183454] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.057271] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053305] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.164608] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.145583] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.293758] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[  +4.053385] systemd-fstab-generator[793]: Ignoring "noauto" option for root device
	[  +2.012710] systemd-fstab-generator[913]: Ignoring "noauto" option for root device
	[  +0.060624] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.524096] kauditd_printk_skb: 69 callbacks suppressed
	[Sep20 19:05] kauditd_printk_skb: 90 callbacks suppressed
	[Sep20 19:09] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.997588] systemd-fstab-generator[2582]: Ignoring "noauto" option for root device
	[  +4.842166] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.735040] systemd-fstab-generator[2904]: Ignoring "noauto" option for root device
	[  +5.414043] systemd-fstab-generator[3030]: Ignoring "noauto" option for root device
	[  +0.112090] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.168313] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [5ccd1fc6b8f8df2809889e166b96a1b6c2f3430bc180ebf5a173b2821961e388] <==
	{"level":"info","ts":"2024-09-20T19:09:40.183085Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T19:09:40.185942Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.72:2380"}
	{"level":"info","ts":"2024-09-20T19:09:40.186165Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.72:2380"}
	{"level":"info","ts":"2024-09-20T19:09:40.189472Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"69bbf9a7ee633bdb","initial-advertise-peer-urls":["https://192.168.72.72:2380"],"listen-peer-urls":["https://192.168.72.72:2380"],"advertise-client-urls":["https://192.168.72.72:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.72:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T19:09:40.189569Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T19:09:40.327006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"69bbf9a7ee633bdb is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-20T19:09:40.327224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"69bbf9a7ee633bdb became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T19:09:40.327364Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"69bbf9a7ee633bdb received MsgPreVoteResp from 69bbf9a7ee633bdb at term 1"}
	{"level":"info","ts":"2024-09-20T19:09:40.327476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"69bbf9a7ee633bdb became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T19:09:40.327549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"69bbf9a7ee633bdb received MsgVoteResp from 69bbf9a7ee633bdb at term 2"}
	{"level":"info","ts":"2024-09-20T19:09:40.327627Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"69bbf9a7ee633bdb became leader at term 2"}
	{"level":"info","ts":"2024-09-20T19:09:40.327665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 69bbf9a7ee633bdb elected leader 69bbf9a7ee633bdb at term 2"}
	{"level":"info","ts":"2024-09-20T19:09:40.332258Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"69bbf9a7ee633bdb","local-member-attributes":"{Name:embed-certs-339897 ClientURLs:[https://192.168.72.72:2379]}","request-path":"/0/members/69bbf9a7ee633bdb/attributes","cluster-id":"57220485084312a4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T19:09:40.332646Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:09:40.336014Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T19:09:40.336086Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T19:09:40.332665Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:09:40.334014Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:09:40.342084Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"57220485084312a4","local-member-id":"69bbf9a7ee633bdb","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:09:40.342216Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:09:40.342276Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:09:40.344987Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:09:40.345867Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.72:2379"}
	{"level":"info","ts":"2024-09-20T19:09:40.348448Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:09:40.357156Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:19:01 up 14 min,  0 users,  load average: 0.28, 0.26, 0.18
	Linux embed-certs-339897 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0cf7beb540ba5cb3ff61d3baf08eafacc28e1e1c95c33cb69a1032e335f1ed53] <==
	W0920 19:14:43.300188       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:14:43.300297       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 19:14:43.301207       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 19:14:43.302332       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 19:15:43.301791       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:15:43.302078       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 19:15:43.303075       1 handler_proxy.go:99] no RequestInfo found in the context
	I0920 19:15:43.303127       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0920 19:15:43.303228       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 19:15:43.305133       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 19:17:43.304054       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:17:43.304390       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0920 19:17:43.306213       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 19:17:43.306488       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:17:43.306720       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 19:17:43.307973       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [9aca960651b40b7a4edbec39fb2f9680b94b5bf9052d8e236bcb33f39f501413] <==
	W0920 19:09:32.058373       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.116763       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.128470       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.137629       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.139192       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.140615       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.175765       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.187451       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.271434       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.306536       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.324028       1 logging.go:55] [core] [Channel #13 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.464831       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.486064       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.489486       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.512273       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.538514       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.556987       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.618785       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.650551       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.679872       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.731762       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.788000       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.827884       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.845611       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:34.422263       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [cf43e654caeb5f98cdf5e1da55898b1b9522078243dc9fc219566b5edb16a0f8] <==
	E0920 19:13:49.175412       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:13:49.702773       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:14:19.188172       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:14:19.711347       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:14:49.195589       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:14:49.724314       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 19:15:01.020612       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-339897"
	E0920 19:15:19.202274       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:15:19.733199       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 19:15:43.136978       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="323.693µs"
	E0920 19:15:49.209158       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:15:49.741782       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 19:15:57.141859       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="153.793µs"
	E0920 19:16:19.216481       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:16:19.750321       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:16:49.222656       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:16:49.769136       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:17:19.229444       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:17:19.777964       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:17:49.236341       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:17:49.786309       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:18:19.243610       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:18:19.797301       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:18:49.250380       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:18:49.806594       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c364c544d7d7f031ff432964306d762430675bdcc271d3bab8e950e2a8f7fc28] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 19:09:51.538422       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 19:09:51.567025       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.72"]
	E0920 19:09:51.567115       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 19:09:51.635526       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 19:09:51.635583       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 19:09:51.635615       1 server_linux.go:169] "Using iptables Proxier"
	I0920 19:09:51.647611       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 19:09:51.648461       1 server.go:483] "Version info" version="v1.31.1"
	I0920 19:09:51.648854       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:09:51.655384       1 config.go:199] "Starting service config controller"
	I0920 19:09:51.663791       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 19:09:51.663873       1 config.go:105] "Starting endpoint slice config controller"
	I0920 19:09:51.663881       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 19:09:51.689340       1 config.go:328] "Starting node config controller"
	I0920 19:09:51.696202       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 19:09:51.767846       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 19:09:51.768004       1 shared_informer.go:320] Caches are synced for service config
	I0920 19:09:51.796521       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2eb06bd75ec7169fc5a461bfe6e3646d39d5a4ee55e2ccaa859a7ed4a2d4a2f0] <==
	W0920 19:09:43.280942       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 19:09:43.281083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:43.291127       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 19:09:43.291173       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:43.331527       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 19:09:43.331645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:43.374740       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 19:09:43.374877       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:43.415621       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 19:09:43.415723       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:43.451776       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 19:09:43.451928       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 19:09:43.477524       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 19:09:43.477738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:43.499767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 19:09:43.500002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:43.527109       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 19:09:43.527239       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:43.635036       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 19:09:43.635085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:43.648725       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 19:09:43.648790       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:43.696301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 19:09:43.696518       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 19:09:46.712130       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 19:17:45 embed-certs-339897 kubelet[2911]: E0920 19:17:45.274423    2911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859865274121323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:17:55 embed-certs-339897 kubelet[2911]: E0920 19:17:55.276087    2911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859875275571734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:17:55 embed-certs-339897 kubelet[2911]: E0920 19:17:55.276156    2911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859875275571734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:17:58 embed-certs-339897 kubelet[2911]: E0920 19:17:58.119273    2911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tw9fh" podUID="8366591d-8916-4b9f-be8a-64ddc185f576"
	Sep 20 19:18:05 embed-certs-339897 kubelet[2911]: E0920 19:18:05.277418    2911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859885277201540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:05 embed-certs-339897 kubelet[2911]: E0920 19:18:05.277461    2911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859885277201540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:12 embed-certs-339897 kubelet[2911]: E0920 19:18:12.118627    2911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tw9fh" podUID="8366591d-8916-4b9f-be8a-64ddc185f576"
	Sep 20 19:18:15 embed-certs-339897 kubelet[2911]: E0920 19:18:15.279831    2911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859895278999233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:15 embed-certs-339897 kubelet[2911]: E0920 19:18:15.280093    2911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859895278999233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:24 embed-certs-339897 kubelet[2911]: E0920 19:18:24.119080    2911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tw9fh" podUID="8366591d-8916-4b9f-be8a-64ddc185f576"
	Sep 20 19:18:25 embed-certs-339897 kubelet[2911]: E0920 19:18:25.281284    2911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859905280939564,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:25 embed-certs-339897 kubelet[2911]: E0920 19:18:25.281579    2911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859905280939564,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:35 embed-certs-339897 kubelet[2911]: E0920 19:18:35.283715    2911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859915282836199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:35 embed-certs-339897 kubelet[2911]: E0920 19:18:35.284037    2911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859915282836199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:36 embed-certs-339897 kubelet[2911]: E0920 19:18:36.118999    2911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tw9fh" podUID="8366591d-8916-4b9f-be8a-64ddc185f576"
	Sep 20 19:18:45 embed-certs-339897 kubelet[2911]: E0920 19:18:45.154762    2911 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 19:18:45 embed-certs-339897 kubelet[2911]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 19:18:45 embed-certs-339897 kubelet[2911]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 19:18:45 embed-certs-339897 kubelet[2911]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 19:18:45 embed-certs-339897 kubelet[2911]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 19:18:45 embed-certs-339897 kubelet[2911]: E0920 19:18:45.286074    2911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859925285578486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:45 embed-certs-339897 kubelet[2911]: E0920 19:18:45.286102    2911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859925285578486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:51 embed-certs-339897 kubelet[2911]: E0920 19:18:51.118823    2911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tw9fh" podUID="8366591d-8916-4b9f-be8a-64ddc185f576"
	Sep 20 19:18:55 embed-certs-339897 kubelet[2911]: E0920 19:18:55.287799    2911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859935287465709,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:18:55 embed-certs-339897 kubelet[2911]: E0920 19:18:55.287850    2911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859935287465709,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [2723b5d731b534edb71c51be36dec0081571147745dfe4442c1ee88181556806] <==
	I0920 19:09:52.411423       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 19:09:52.431962       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 19:09:52.432352       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 19:09:52.452466       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 19:09:52.452685       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-339897_c64a37ba-95c8-4532-96d1-45c90f750de0!
	I0920 19:09:52.454711       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a5fbf437-d763-4ef8-97ec-738d2b6a87d2", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-339897_c64a37ba-95c8-4532-96d1-45c90f750de0 became leader
	I0920 19:09:52.552922       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-339897_c64a37ba-95c8-4532-96d1-45c90f750de0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-339897 -n embed-certs-339897
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-339897 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-tw9fh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-339897 describe pod metrics-server-6867b74b74-tw9fh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-339897 describe pod metrics-server-6867b74b74-tw9fh: exit status 1 (67.665301ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-tw9fh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-339897 describe pod metrics-server-6867b74b74-tw9fh: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0920 19:12:29.486857  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:13:04.096056  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-037711 -n no-preload-037711
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-20 19:20:10.996568003 +0000 UTC m=+6273.657014650
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-037711 -n no-preload-037711
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-037711 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-037711 logs -n 25: (2.21547249s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-793540 sudo cat                             | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo                                 | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo                                 | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo                                 | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo find                            | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo crio                            | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-793540                                      | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-896665 | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | disable-driver-mounts-896665                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:57 UTC |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-037711             | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-037711                                   | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-339897            | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-339897                                  | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-612312  | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-037711                  | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-037711                                   | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC | 20 Sep 24 19:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-339897                 | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-425599        | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-339897                                  | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC | 20 Sep 24 19:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-612312       | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC | 20 Sep 24 19:09 UTC |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-425599                              | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-425599             | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-425599                              | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:01:28
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:01:28.948776  303486 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:01:28.948894  303486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:01:28.948900  303486 out.go:358] Setting ErrFile to fd 2...
	I0920 19:01:28.948906  303486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:01:28.949090  303486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 19:01:28.949637  303486 out.go:352] Setting JSON to false
	I0920 19:01:28.950705  303486 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9832,"bootTime":1726849057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 19:01:28.950809  303486 start.go:139] virtualization: kvm guest
	I0920 19:01:28.953226  303486 out.go:177] * [old-k8s-version-425599] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 19:01:28.955013  303486 notify.go:220] Checking for updates...
	I0920 19:01:28.955065  303486 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 19:01:28.956932  303486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:01:28.959076  303486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:01:28.961116  303486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 19:01:28.963396  303486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 19:01:28.965428  303486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:01:28.967688  303486 config.go:182] Loaded profile config "old-k8s-version-425599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 19:01:28.968112  303486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:01:28.968175  303486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:01:28.984002  303486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40325
	I0920 19:01:28.984552  303486 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:01:28.985260  303486 main.go:141] libmachine: Using API Version  1
	I0920 19:01:28.985291  303486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:01:28.985715  303486 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:01:28.985972  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:01:28.988070  303486 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 19:01:28.989565  303486 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:01:28.990007  303486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:01:28.990079  303486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:01:29.006020  303486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34059
	I0920 19:01:29.006492  303486 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:01:29.007046  303486 main.go:141] libmachine: Using API Version  1
	I0920 19:01:29.007078  303486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:01:29.007441  303486 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:01:29.007706  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:01:29.049785  303486 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 19:01:29.051185  303486 start.go:297] selected driver: kvm2
	I0920 19:01:29.051206  303486 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:01:29.051323  303486 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:01:29.052030  303486 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:01:29.052131  303486 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 19:01:29.068826  303486 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 19:01:29.069232  303486 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:01:29.069262  303486 cni.go:84] Creating CNI manager for ""
	I0920 19:01:29.069297  303486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:01:29.069333  303486 start.go:340] cluster config:
	{Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:01:29.069439  303486 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:01:29.071617  303486 out.go:177] * Starting "old-k8s-version-425599" primary control-plane node in "old-k8s-version-425599" cluster
	I0920 19:01:27.086248  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:29.073133  303486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 19:01:29.073174  303486 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 19:01:29.073182  303486 cache.go:56] Caching tarball of preloaded images
	I0920 19:01:29.073269  303486 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 19:01:29.073285  303486 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 19:01:29.073388  303486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/config.json ...
	I0920 19:01:29.073573  303486 start.go:360] acquireMachinesLock for old-k8s-version-425599: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:01:33.166258  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:36.238261  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:42.318195  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:45.390223  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:51.470272  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:54.542277  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:00.622232  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:03.694275  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:09.774241  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:12.846248  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:18.926213  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:21.998195  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:28.078192  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:31.150239  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:37.230160  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:40.302224  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:46.382225  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:49.454205  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:55.534186  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:58.606232  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:04.686254  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:07.758234  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:13.838239  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:16.910321  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:22.990234  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:26.062339  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:32.142210  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:35.214256  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:41.294234  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:44.366288  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:50.446215  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:53.518266  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:59.598190  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:02.670240  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:08.750179  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:11.822239  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:17.902176  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:20.974235  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:23.977804  302869 start.go:364] duration metric: took 4m19.519175605s to acquireMachinesLock for "embed-certs-339897"
	I0920 19:04:23.977868  302869 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:04:23.977876  302869 fix.go:54] fixHost starting: 
	I0920 19:04:23.978233  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:04:23.978277  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:04:23.993804  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39423
	I0920 19:04:23.994326  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:04:23.994906  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:04:23.994925  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:04:23.995219  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:04:23.995413  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:23.995575  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:04:23.997417  302869 fix.go:112] recreateIfNeeded on embed-certs-339897: state=Stopped err=<nil>
	I0920 19:04:23.997439  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	W0920 19:04:23.997636  302869 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:04:24.001021  302869 out.go:177] * Restarting existing kvm2 VM for "embed-certs-339897" ...
	I0920 19:04:24.002636  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Start
	I0920 19:04:24.002842  302869 main.go:141] libmachine: (embed-certs-339897) Ensuring networks are active...
	I0920 19:04:24.003916  302869 main.go:141] libmachine: (embed-certs-339897) Ensuring network default is active
	I0920 19:04:24.004282  302869 main.go:141] libmachine: (embed-certs-339897) Ensuring network mk-embed-certs-339897 is active
	I0920 19:04:24.004647  302869 main.go:141] libmachine: (embed-certs-339897) Getting domain xml...
	I0920 19:04:24.005446  302869 main.go:141] libmachine: (embed-certs-339897) Creating domain...
	I0920 19:04:23.975096  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:04:23.975155  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:04:23.975457  302538 buildroot.go:166] provisioning hostname "no-preload-037711"
	I0920 19:04:23.975485  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:04:23.975712  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:04:23.977607  302538 machine.go:96] duration metric: took 4m37.412034117s to provisionDockerMachine
	I0920 19:04:23.977703  302538 fix.go:56] duration metric: took 4m37.437032108s for fixHost
	I0920 19:04:23.977718  302538 start.go:83] releasing machines lock for "no-preload-037711", held for 4m37.437081737s
	W0920 19:04:23.977745  302538 start.go:714] error starting host: provision: host is not running
	W0920 19:04:23.977850  302538 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0920 19:04:23.977859  302538 start.go:729] Will try again in 5 seconds ...
	I0920 19:04:25.258221  302869 main.go:141] libmachine: (embed-certs-339897) Waiting to get IP...
	I0920 19:04:25.259119  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:25.259493  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:25.259584  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:25.259481  304091 retry.go:31] will retry after 212.462393ms: waiting for machine to come up
	I0920 19:04:25.474057  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:25.474524  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:25.474564  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:25.474441  304091 retry.go:31] will retry after 306.01691ms: waiting for machine to come up
	I0920 19:04:25.782264  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:25.782729  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:25.782753  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:25.782706  304091 retry.go:31] will retry after 416.637796ms: waiting for machine to come up
	I0920 19:04:26.201336  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:26.201704  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:26.201738  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:26.201645  304091 retry.go:31] will retry after 583.373452ms: waiting for machine to come up
	I0920 19:04:26.786448  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:26.786854  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:26.786876  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:26.786807  304091 retry.go:31] will retry after 760.706965ms: waiting for machine to come up
	I0920 19:04:27.548786  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:27.549126  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:27.549149  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:27.549088  304091 retry.go:31] will retry after 615.829194ms: waiting for machine to come up
	I0920 19:04:28.167061  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:28.167601  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:28.167647  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:28.167419  304091 retry.go:31] will retry after 786.700064ms: waiting for machine to come up
	I0920 19:04:28.955294  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:28.955658  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:28.955685  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:28.955611  304091 retry.go:31] will retry after 1.309567829s: waiting for machine to come up
	I0920 19:04:28.979506  302538 start.go:360] acquireMachinesLock for no-preload-037711: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:04:30.267104  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:30.267645  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:30.267676  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:30.267583  304091 retry.go:31] will retry after 1.153396834s: waiting for machine to come up
	I0920 19:04:31.423030  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:31.423604  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:31.423629  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:31.423542  304091 retry.go:31] will retry after 1.858288741s: waiting for machine to come up
	I0920 19:04:33.284886  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:33.285381  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:33.285419  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:33.285334  304091 retry.go:31] will retry after 2.343802005s: waiting for machine to come up
	I0920 19:04:35.630962  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:35.631408  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:35.631439  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:35.631359  304091 retry.go:31] will retry after 2.42254126s: waiting for machine to come up
	I0920 19:04:38.055128  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:38.055796  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:38.055854  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:38.055732  304091 retry.go:31] will retry after 3.877296828s: waiting for machine to come up
	I0920 19:04:43.362725  303063 start.go:364] duration metric: took 4m20.211671699s to acquireMachinesLock for "default-k8s-diff-port-612312"
	I0920 19:04:43.362794  303063 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:04:43.362810  303063 fix.go:54] fixHost starting: 
	I0920 19:04:43.363257  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:04:43.363315  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:04:43.380877  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34751
	I0920 19:04:43.381399  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:04:43.381894  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:04:43.381933  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:04:43.382364  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:04:43.382596  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:04:43.382746  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:04:43.384351  303063 fix.go:112] recreateIfNeeded on default-k8s-diff-port-612312: state=Stopped err=<nil>
	I0920 19:04:43.384379  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	W0920 19:04:43.384540  303063 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:04:43.386969  303063 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-612312" ...
	I0920 19:04:41.936215  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.936789  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has current primary IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.936811  302869 main.go:141] libmachine: (embed-certs-339897) Found IP for machine: 192.168.72.72
	I0920 19:04:41.936823  302869 main.go:141] libmachine: (embed-certs-339897) Reserving static IP address...
	I0920 19:04:41.937386  302869 main.go:141] libmachine: (embed-certs-339897) Reserved static IP address: 192.168.72.72
	I0920 19:04:41.937412  302869 main.go:141] libmachine: (embed-certs-339897) Waiting for SSH to be available...
	I0920 19:04:41.937435  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "embed-certs-339897", mac: "52:54:00:dc:b1:41", ip: "192.168.72.72"} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:41.937466  302869 main.go:141] libmachine: (embed-certs-339897) DBG | skip adding static IP to network mk-embed-certs-339897 - found existing host DHCP lease matching {name: "embed-certs-339897", mac: "52:54:00:dc:b1:41", ip: "192.168.72.72"}
	I0920 19:04:41.937481  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Getting to WaitForSSH function...
	I0920 19:04:41.939673  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.940065  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:41.940089  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.940196  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Using SSH client type: external
	I0920 19:04:41.940223  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa (-rw-------)
	I0920 19:04:41.940261  302869 main.go:141] libmachine: (embed-certs-339897) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:04:41.940274  302869 main.go:141] libmachine: (embed-certs-339897) DBG | About to run SSH command:
	I0920 19:04:41.940285  302869 main.go:141] libmachine: (embed-certs-339897) DBG | exit 0
	I0920 19:04:42.065967  302869 main.go:141] libmachine: (embed-certs-339897) DBG | SSH cmd err, output: <nil>: 
	I0920 19:04:42.066357  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetConfigRaw
	I0920 19:04:42.067004  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:42.069586  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.069937  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.069968  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.070208  302869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/config.json ...
	I0920 19:04:42.070452  302869 machine.go:93] provisionDockerMachine start ...
	I0920 19:04:42.070478  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:42.070687  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.072878  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.073340  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.073375  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.073501  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.073701  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.073899  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.074080  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.074254  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.074504  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.074523  302869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:04:42.182250  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:04:42.182287  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetMachineName
	I0920 19:04:42.182543  302869 buildroot.go:166] provisioning hostname "embed-certs-339897"
	I0920 19:04:42.182570  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetMachineName
	I0920 19:04:42.182818  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.185497  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.185850  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.185886  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.186069  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.186274  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.186421  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.186568  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.186770  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.186986  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.187006  302869 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-339897 && echo "embed-certs-339897" | sudo tee /etc/hostname
	I0920 19:04:42.307656  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-339897
	
	I0920 19:04:42.307700  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.310572  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.310943  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.310970  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.311184  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.311382  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.311534  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.311663  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.311810  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.311984  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.312003  302869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-339897' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-339897/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-339897' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:04:42.426403  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:04:42.426440  302869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:04:42.426493  302869 buildroot.go:174] setting up certificates
	I0920 19:04:42.426502  302869 provision.go:84] configureAuth start
	I0920 19:04:42.426513  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetMachineName
	I0920 19:04:42.426822  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:42.429708  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.430134  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.430170  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.430328  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.432799  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.433222  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.433251  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.433383  302869 provision.go:143] copyHostCerts
	I0920 19:04:42.433466  302869 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:04:42.433487  302869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:04:42.433549  302869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:04:42.433644  302869 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:04:42.433652  302869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:04:42.433678  302869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:04:42.433735  302869 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:04:42.433742  302869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:04:42.433762  302869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:04:42.433811  302869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.embed-certs-339897 san=[127.0.0.1 192.168.72.72 embed-certs-339897 localhost minikube]
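
The provision.go line above issues a machine server certificate signed by the test CA, carrying the SAN list shown (`san=[127.0.0.1 192.168.72.72 embed-certs-339897 localhost minikube]`). As a rough illustration only, and not minikube's actual code path, the following minimal Go sketch issues a SAN-bearing server certificate from an existing CA; the file names are placeholders and it assumes an RSA CA key in PKCS#1 PEM form.

    // Minimal sketch (not minikube source): issue a server certificate signed by
    // an existing CA, carrying the same style of SAN list as the log line above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            log.Fatal(err)
        }
    }

    func main() {
        caPEM, err := os.ReadFile("ca.pem") // placeholder path
        check(err)
        caKeyPEM, err := os.ReadFile("ca-key.pem") // placeholder path
        check(err)

        caBlock, _ := pem.Decode(caPEM)
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        check(err)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes RSA/PKCS#1 key
        check(err)

        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-339897"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs mirroring the san=[...] list in the log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.72")},
            DNSNames:    []string{"embed-certs-339897", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        check(err)
        check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }

A real provisioner would also write out the generated private key next to the certificate; the sketch prints only the certificate to keep the example short.
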
	I0920 19:04:42.745528  302869 provision.go:177] copyRemoteCerts
	I0920 19:04:42.745599  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:04:42.745633  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.748247  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.748587  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.748619  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.748811  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.749014  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.749201  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.749334  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:42.831927  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:04:42.855674  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:04:42.879114  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 19:04:42.902982  302869 provision.go:87] duration metric: took 476.462339ms to configureAuth
	I0920 19:04:42.903019  302869 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:04:42.903236  302869 config.go:182] Loaded profile config "embed-certs-339897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:04:42.903321  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.906208  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.906580  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.906613  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.906810  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.907006  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.907136  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.907263  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.907427  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.907601  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.907616  302869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:04:43.127800  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:04:43.127847  302869 machine.go:96] duration metric: took 1.057372659s to provisionDockerMachine
	I0920 19:04:43.127864  302869 start.go:293] postStartSetup for "embed-certs-339897" (driver="kvm2")
	I0920 19:04:43.127890  302869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:04:43.127917  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.128263  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:04:43.128298  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.131648  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.132138  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.132173  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.132340  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.132560  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.132739  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.132896  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:43.216646  302869 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:04:43.220513  302869 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:04:43.220548  302869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:04:43.220629  302869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:04:43.220734  302869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:04:43.220862  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:04:43.230506  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:04:43.252894  302869 start.go:296] duration metric: took 125.003067ms for postStartSetup
	I0920 19:04:43.252943  302869 fix.go:56] duration metric: took 19.275066559s for fixHost
	I0920 19:04:43.252971  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.255999  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.256378  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.256406  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.256634  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.256858  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.257047  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.257214  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.257382  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:43.257546  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:43.257556  302869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:04:43.362516  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859083.339291891
	
	I0920 19:04:43.362545  302869 fix.go:216] guest clock: 1726859083.339291891
	I0920 19:04:43.362553  302869 fix.go:229] Guest: 2024-09-20 19:04:43.339291891 +0000 UTC Remote: 2024-09-20 19:04:43.25294824 +0000 UTC m=+278.942139838 (delta=86.343651ms)
	I0920 19:04:43.362585  302869 fix.go:200] guest clock delta is within tolerance: 86.343651ms
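
The fix.go lines above read the guest clock with `date +%s.%N` over SSH and accept it when the delta against the host clock is small. A tiny illustrative Go sketch of that comparison; the 2-second tolerance is an assumed value for illustration, not necessarily the threshold minikube uses.

    // Illustrative sketch: parse the guest's `date +%s.%N` output and compare it
    // with the local clock, as the fix.go lines above do.
    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guestOut := "1726859083.339291891\n" // sample output from the log above
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)

        const tolerance = 2 * time.Second // assumed tolerance, for illustration only
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
        }
    }
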
	I0920 19:04:43.362591  302869 start.go:83] releasing machines lock for "embed-certs-339897", held for 19.38474105s
	I0920 19:04:43.362620  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.362970  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:43.365988  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.366359  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.366380  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.366610  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.367130  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.367326  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.367423  302869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:04:43.367469  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.367602  302869 ssh_runner.go:195] Run: cat /version.json
	I0920 19:04:43.367628  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.370233  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.370594  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.370624  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.370649  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.370804  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.370998  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.371169  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.371191  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.371249  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.371406  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.371470  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:43.371566  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.371721  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.371885  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:43.490023  302869 ssh_runner.go:195] Run: systemctl --version
	I0920 19:04:43.496615  302869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:04:43.643493  302869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:04:43.649492  302869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:04:43.649560  302869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:04:43.665423  302869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:04:43.665460  302869 start.go:495] detecting cgroup driver to use...
	I0920 19:04:43.665530  302869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:04:43.681288  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:04:43.695161  302869 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:04:43.695218  302869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:04:43.708772  302869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:04:43.722803  302869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:04:43.834054  302869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:04:43.966014  302869 docker.go:233] disabling docker service ...
	I0920 19:04:43.966102  302869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:04:43.982324  302869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:04:43.995351  302869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:04:44.135635  302869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:04:44.262661  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:04:44.277377  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:04:44.299889  302869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:04:44.299965  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.312434  302869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:04:44.312534  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.323052  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.333504  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.343704  302869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:04:44.354386  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.364308  302869 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.383581  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
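
The ssh_runner commands above rewrite /etc/crio/crio.conf.d/02-crio.conf with sed: pinning the pause image to registry.k8s.io/pause:3.10, switching the cgroup manager to cgroupfs, and opening unprivileged ports via default_sysctls. A minimal Go sketch of the same kind of in-place substitution, operating on an example config string rather than the remote file, to show what those sed lines do:

    // Illustrative sketch: the Go analogue of the sed edits above applied to a
    // sample 02-crio.conf snippet (pause image and cgroup manager).
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    [crio.runtime]
    cgroup_manager = "systemd"
    `
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

        conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        fmt.Print(conf)
    }

After these edits the log continues with `systemctl daemon-reload` and `systemctl restart crio` so the runtime picks up the new configuration.
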
	I0920 19:04:44.395013  302869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:04:44.405227  302869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:04:44.405279  302869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:04:44.418685  302869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:04:44.431323  302869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:04:44.558582  302869 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:04:44.644003  302869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:04:44.644091  302869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:04:44.649434  302869 start.go:563] Will wait 60s for crictl version
	I0920 19:04:44.649498  302869 ssh_runner.go:195] Run: which crictl
	I0920 19:04:44.653334  302869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:04:44.695896  302869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:04:44.696004  302869 ssh_runner.go:195] Run: crio --version
	I0920 19:04:44.726148  302869 ssh_runner.go:195] Run: crio --version
	I0920 19:04:44.757340  302869 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 19:04:43.388378  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Start
	I0920 19:04:43.388603  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Ensuring networks are active...
	I0920 19:04:43.389387  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Ensuring network default is active
	I0920 19:04:43.389863  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Ensuring network mk-default-k8s-diff-port-612312 is active
	I0920 19:04:43.390364  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Getting domain xml...
	I0920 19:04:43.391121  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Creating domain...
	I0920 19:04:44.718004  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting to get IP...
	I0920 19:04:44.718885  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.719317  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.719413  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:44.719288  304227 retry.go:31] will retry after 197.63251ms: waiting for machine to come up
	I0920 19:04:44.919026  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.919516  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.919547  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:44.919475  304227 retry.go:31] will retry after 305.409091ms: waiting for machine to come up
	I0920 19:04:45.227550  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.228191  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.228224  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:45.228147  304227 retry.go:31] will retry after 311.72219ms: waiting for machine to come up
	I0920 19:04:45.541945  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.542374  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.542403  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:45.542344  304227 retry.go:31] will retry after 547.369471ms: waiting for machine to come up
	I0920 19:04:46.091199  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.091731  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.091765  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:46.091693  304227 retry.go:31] will retry after 519.190971ms: waiting for machine to come up
	I0920 19:04:46.612175  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.612641  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.612672  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:46.612591  304227 retry.go:31] will retry after 715.908704ms: waiting for machine to come up
	I0920 19:04:47.330911  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:47.331350  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:47.331379  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:47.331294  304227 retry.go:31] will retry after 898.358136ms: waiting for machine to come up
	I0920 19:04:44.759090  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:44.762331  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:44.762696  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:44.762728  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:44.762954  302869 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 19:04:44.767209  302869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:04:44.781327  302869 kubeadm.go:883] updating cluster {Name:embed-certs-339897 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-339897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:04:44.781465  302869 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:04:44.781512  302869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:04:44.817356  302869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 19:04:44.817422  302869 ssh_runner.go:195] Run: which lz4
	I0920 19:04:44.821534  302869 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:04:44.826169  302869 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:04:44.826205  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 19:04:46.160290  302869 crio.go:462] duration metric: took 1.338795677s to copy over tarball
	I0920 19:04:46.160379  302869 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 19:04:48.265535  302869 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.105118482s)
	I0920 19:04:48.265580  302869 crio.go:469] duration metric: took 2.105250135s to extract the tarball
	I0920 19:04:48.265588  302869 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 19:04:48.302529  302869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:04:48.346391  302869 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:04:48.346419  302869 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:04:48.346427  302869 kubeadm.go:934] updating node { 192.168.72.72 8443 v1.31.1 crio true true} ...
	I0920 19:04:48.346566  302869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-339897 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-339897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:04:48.346668  302869 ssh_runner.go:195] Run: crio config
	I0920 19:04:48.396798  302869 cni.go:84] Creating CNI manager for ""
	I0920 19:04:48.396824  302869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:04:48.396834  302869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:04:48.396866  302869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.72 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-339897 NodeName:embed-certs-339897 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:04:48.397043  302869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-339897"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
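
The generated kubeadm.yaml above is a multi-document YAML stream: InitConfiguration and ClusterConfiguration for kubeadm itself, plus KubeletConfiguration and KubeProxyConfiguration. A small sketch, assuming gopkg.in/yaml.v3 is available and using a placeholder file path, that walks such a stream and prints each document's apiVersion and kind:

    // Illustrative sketch: decode the multi-document kubeadm.yaml stream shown
    // above and list the kind of each document.
    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }
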
	I0920 19:04:48.397121  302869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:04:48.407031  302869 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:04:48.407118  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:04:48.416554  302869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0920 19:04:48.432540  302869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:04:48.448042  302869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0920 19:04:48.465193  302869 ssh_runner.go:195] Run: grep 192.168.72.72	control-plane.minikube.internal$ /etc/hosts
	I0920 19:04:48.469083  302869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:04:48.481123  302869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:04:48.609883  302869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:04:48.627512  302869 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897 for IP: 192.168.72.72
	I0920 19:04:48.627545  302869 certs.go:194] generating shared ca certs ...
	I0920 19:04:48.627571  302869 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:04:48.627784  302869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:04:48.627851  302869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:04:48.627866  302869 certs.go:256] generating profile certs ...
	I0920 19:04:48.628032  302869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/client.key
	I0920 19:04:48.628143  302869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/apiserver.key.308547ed
	I0920 19:04:48.628206  302869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/proxy-client.key
	I0920 19:04:48.628375  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:04:48.628421  302869 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:04:48.628435  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:04:48.628470  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:04:48.628509  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:04:48.628542  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:04:48.628616  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:04:48.629569  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:04:48.656203  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:04:48.708322  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:04:48.737686  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:04:48.772198  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 19:04:48.812086  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 19:04:48.836038  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:04:48.859972  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 19:04:48.883881  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:04:48.908399  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:04:48.930787  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:04:48.954052  302869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:04:48.970257  302869 ssh_runner.go:195] Run: openssl version
	I0920 19:04:48.976072  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:04:48.986449  302869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:04:48.990765  302869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:04:48.990833  302869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:04:48.996437  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:04:49.007111  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:04:49.017548  302869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:04:49.022044  302869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:04:49.022108  302869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:04:49.027752  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:04:49.038538  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:04:49.049445  302869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:04:49.054018  302869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:04:49.054100  302869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:04:49.059842  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:04:49.070748  302869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:04:49.075195  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:04:49.081100  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:04:49.086844  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:04:49.092790  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:04:49.098664  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:04:49.104562  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
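
The openssl runs above use `-checkend 86400` to flag any control-plane certificate that would expire within the next 24 hours before attempting a cluster restart. An equivalent check written as a small Go sketch; the certificate path is taken from the log, the rest is illustrative:

    // Illustrative sketch: the Go equivalent of `openssl x509 -checkend 86400`,
    // i.e. "does this certificate expire within the next 24 hours?".
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate will expire within 24h")
        } else {
            fmt.Println("certificate is valid for at least another 24h")
        }
    }
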
	I0920 19:04:49.110818  302869 kubeadm.go:392] StartCluster: {Name:embed-certs-339897 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-339897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:04:49.110952  302869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:04:49.111003  302869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:04:49.157700  302869 cri.go:89] found id: ""
	I0920 19:04:49.157774  302869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:04:49.168314  302869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:04:49.168339  302869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:04:49.168385  302869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:04:49.178632  302869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:04:49.179681  302869 kubeconfig.go:125] found "embed-certs-339897" server: "https://192.168.72.72:8443"
	I0920 19:04:49.181624  302869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:04:49.192084  302869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.72
	I0920 19:04:49.192159  302869 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:04:49.192188  302869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:04:49.192265  302869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:04:49.229141  302869 cri.go:89] found id: ""
	I0920 19:04:49.229232  302869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:04:49.247628  302869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:04:49.258190  302869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:04:49.258211  302869 kubeadm.go:157] found existing configuration files:
	
	I0920 19:04:49.258270  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:04:49.267769  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:04:49.267837  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:04:49.277473  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:04:49.286639  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:04:49.286712  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:04:49.296295  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:04:49.305705  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:04:49.305787  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:04:49.315191  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:04:49.324206  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:04:49.324288  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:04:49.334065  302869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:04:49.344823  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:48.231405  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:48.231846  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:48.231872  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:48.231795  304227 retry.go:31] will retry after 1.105264539s: waiting for machine to come up
	I0920 19:04:49.338940  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:49.339413  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:49.339437  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:49.339366  304227 retry.go:31] will retry after 1.638536651s: waiting for machine to come up
	I0920 19:04:50.980320  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:50.980774  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:50.980805  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:50.980714  304227 retry.go:31] will retry after 2.064766522s: waiting for machine to come up
	I0920 19:04:49.450454  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.412643  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.629144  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.694547  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.756897  302869 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:04:50.757008  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:51.258120  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:51.758025  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:52.258040  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:52.757302  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:52.774867  302869 api_server.go:72] duration metric: took 2.017964832s to wait for apiserver process to appear ...
	I0920 19:04:52.774906  302869 api_server.go:88] waiting for apiserver healthz status ...
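
The api_server.go lines that follow poll https://192.168.72.72:8443/healthz until the apiserver reports healthy: in this log the anonymous requests are first rejected with 403, then the endpoint returns 500 while post-start hooks such as rbac/bootstrap-roles are still completing, and it is expected to settle at 200 once healthy. A rough Go sketch of such a polling loop, assuming a throwaway client that skips TLS verification (minikube's own checker is configured differently):

    // Illustrative sketch: poll the apiserver /healthz endpoint and print each
    // response until it reports 200 OK or the retry budget runs out.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver certificate is not trusted by this throwaway client,
            // so verification is skipped here purely for illustration.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.72.72:8443/healthz"
        for i := 0; i < 30; i++ {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("healthz not reachable yet:", err)
            } else {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("healthz returned %d\n%s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
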
	I0920 19:04:52.774954  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:55.383214  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:04:55.383255  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:04:55.383272  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:55.406625  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:04:55.406660  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:04:55.775825  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:55.785126  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:04:55.785157  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:04:56.275864  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:56.284002  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:04:56.284032  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:04:56.775547  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:56.779999  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 200:
	ok
	I0920 19:04:56.786034  302869 api_server.go:141] control plane version: v1.31.1
	I0920 19:04:56.786066  302869 api_server.go:131] duration metric: took 4.011153019s to wait for apiserver health ...
	I0920 19:04:56.786076  302869 cni.go:84] Creating CNI manager for ""
	I0920 19:04:56.786082  302869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:04:56.788195  302869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
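The healthz exchanges above show the pattern the log follows while kubeadm brings the control plane up: 403 Forbidden while the RBAC bootstrap roles are still being created, 500 while post-start hooks are still running, and finally 200 with a body of "ok". A minimal Go sketch of such a poll loop follows; it is not minikube's api_server.go, and the InsecureSkipVerify transport and the 500ms cadence are assumptions made so the sketch stays self-contained (a real client would trust the cluster CA instead).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it answers
// 200 "ok" or the timeout expires. 403 and 500 responses are treated as
// "not ready yet", matching the responses captured in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// Assumption: skip certificate verification so the sketch needs no
		// cluster CA on disk.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.72:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}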
	I0920 19:04:53.047487  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:53.048005  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:53.048027  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:53.047958  304227 retry.go:31] will retry after 2.829648578s: waiting for machine to come up
	I0920 19:04:55.879069  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:55.879538  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:55.879562  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:55.879488  304227 retry.go:31] will retry after 3.029828813s: waiting for machine to come up
	I0920 19:04:56.789703  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:04:56.799605  302869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:04:56.816974  302869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:04:56.828470  302869 system_pods.go:59] 8 kube-system pods found
	I0920 19:04:56.828582  302869 system_pods.go:61] "coredns-7c65d6cfc9-xnfsk" [5e34a8b9-d748-484a-92ab-0d288ab5f35e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:04:56.828610  302869 system_pods.go:61] "etcd-embed-certs-339897" [1d0e8303-0ab9-418c-ba2d-f0ba33abad36] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 19:04:56.828637  302869 system_pods.go:61] "kube-apiserver-embed-certs-339897" [35569778-54b1-456d-8822-5a53a5e336fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 19:04:56.828655  302869 system_pods.go:61] "kube-controller-manager-embed-certs-339897" [6b9db655-59a1-4975-b3c7-fcc29a912850] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:04:56.828677  302869 system_pods.go:61] "kube-proxy-xs4nd" [a32f4c96-ae6e-4402-89c5-0226a4412d17] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 19:04:56.828694  302869 system_pods.go:61] "kube-scheduler-embed-certs-339897" [81dd07df-2ba9-4f8e-bb16-263bd6496a0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 19:04:56.828716  302869 system_pods.go:61] "metrics-server-6867b74b74-qqhcw" [b720a331-05ef-4528-bd25-0c1e7ef66b16] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:04:56.828729  302869 system_pods.go:61] "storage-provisioner" [08674813-f61d-49e9-a714-5f38b95f058e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 19:04:56.828738  302869 system_pods.go:74] duration metric: took 11.732519ms to wait for pod list to return data ...
	I0920 19:04:56.828748  302869 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:04:56.835747  302869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:04:56.835786  302869 node_conditions.go:123] node cpu capacity is 2
	I0920 19:04:56.835799  302869 node_conditions.go:105] duration metric: took 7.044914ms to run NodePressure ...
	I0920 19:04:56.835822  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:57.221422  302869 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 19:04:57.225575  302869 kubeadm.go:739] kubelet initialised
	I0920 19:04:57.225601  302869 kubeadm.go:740] duration metric: took 4.150722ms waiting for restarted kubelet to initialise ...
	I0920 19:04:57.225610  302869 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:04:57.230469  302869 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace to be "Ready" ...
	I0920 19:04:59.237961  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:58.911412  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:58.911990  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:58.912020  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:58.911956  304227 retry.go:31] will retry after 3.428044067s: waiting for machine to come up
	I0920 19:05:02.343216  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.343633  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Found IP for machine: 192.168.50.230
	I0920 19:05:02.343668  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has current primary IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.343679  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Reserving static IP address...
	I0920 19:05:02.344038  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Reserved static IP address: 192.168.50.230
	I0920 19:05:02.344084  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-612312", mac: "52:54:00:fa:2b:63", ip: "192.168.50.230"} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.344097  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for SSH to be available...
	I0920 19:05:02.344123  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | skip adding static IP to network mk-default-k8s-diff-port-612312 - found existing host DHCP lease matching {name: "default-k8s-diff-port-612312", mac: "52:54:00:fa:2b:63", ip: "192.168.50.230"}
	I0920 19:05:02.344136  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Getting to WaitForSSH function...
	I0920 19:05:02.346591  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.346932  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.346957  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.347110  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Using SSH client type: external
	I0920 19:05:02.347157  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa (-rw-------)
	I0920 19:05:02.347194  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:05:02.347214  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | About to run SSH command:
	I0920 19:05:02.347227  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | exit 0
	I0920 19:05:02.474040  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | SSH cmd err, output: <nil>: 
	I0920 19:05:02.474475  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetConfigRaw
	I0920 19:05:02.475160  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:02.477963  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.478338  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.478361  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.478680  303063 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/config.json ...
	I0920 19:05:02.478923  303063 machine.go:93] provisionDockerMachine start ...
	I0920 19:05:02.478949  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:02.479166  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.481380  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.481759  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.481797  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.481961  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.482149  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.482307  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.482458  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.482619  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:02.482883  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:02.482900  303063 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:05:02.586360  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:05:02.586395  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetMachineName
	I0920 19:05:02.586694  303063 buildroot.go:166] provisioning hostname "default-k8s-diff-port-612312"
	I0920 19:05:02.586720  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetMachineName
	I0920 19:05:02.586951  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.589692  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.590053  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.590080  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.590230  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.590420  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.590563  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.590722  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.590936  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:02.591112  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:02.591126  303063 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-612312 && echo "default-k8s-diff-port-612312" | sudo tee /etc/hostname
	I0920 19:05:02.707768  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-612312
	
	I0920 19:05:02.707799  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.710647  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.711035  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.711064  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.711234  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.711448  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.711622  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.711791  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.711938  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:02.712098  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:02.712116  303063 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-612312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-612312/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-612312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:05:02.828234  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:05:02.828274  303063 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:05:02.828314  303063 buildroot.go:174] setting up certificates
	I0920 19:05:02.828327  303063 provision.go:84] configureAuth start
	I0920 19:05:02.828340  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetMachineName
	I0920 19:05:02.828700  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:02.831997  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.832469  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.832503  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.832704  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.835280  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.835577  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.835608  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.835699  303063 provision.go:143] copyHostCerts
	I0920 19:05:02.835766  303063 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:05:02.835787  303063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:05:02.835848  303063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:05:02.835947  303063 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:05:02.835955  303063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:05:02.835975  303063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:05:02.836026  303063 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:05:02.836033  303063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:05:02.836055  303063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:05:02.836103  303063 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-612312 san=[127.0.0.1 192.168.50.230 default-k8s-diff-port-612312 localhost minikube]
	I0920 19:05:02.983437  303063 provision.go:177] copyRemoteCerts
	I0920 19:05:02.983510  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:05:02.983541  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.986435  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.986791  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.986835  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.987110  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.987289  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.987438  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.987579  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.674961  303486 start.go:364] duration metric: took 3m34.601349843s to acquireMachinesLock for "old-k8s-version-425599"
	I0920 19:05:03.675039  303486 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:05:03.675048  303486 fix.go:54] fixHost starting: 
	I0920 19:05:03.675480  303486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:03.675541  303486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:03.694201  303486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34167
	I0920 19:05:03.694642  303486 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:03.695198  303486 main.go:141] libmachine: Using API Version  1
	I0920 19:05:03.695221  303486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:03.695609  303486 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:03.695793  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:03.695935  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetState
	I0920 19:05:03.697838  303486 fix.go:112] recreateIfNeeded on old-k8s-version-425599: state=Stopped err=<nil>
	I0920 19:05:03.697885  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	W0920 19:05:03.698080  303486 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:05:03.700333  303486 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-425599" ...
	I0920 19:05:03.701947  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .Start
	I0920 19:05:03.702184  303486 main.go:141] libmachine: (old-k8s-version-425599) Ensuring networks are active...
	I0920 19:05:03.703106  303486 main.go:141] libmachine: (old-k8s-version-425599) Ensuring network default is active
	I0920 19:05:03.703645  303486 main.go:141] libmachine: (old-k8s-version-425599) Ensuring network mk-old-k8s-version-425599 is active
	I0920 19:05:03.704152  303486 main.go:141] libmachine: (old-k8s-version-425599) Getting domain xml...
	I0920 19:05:03.704942  303486 main.go:141] libmachine: (old-k8s-version-425599) Creating domain...
	I0920 19:05:01.738488  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:03.238934  302869 pod_ready.go:93] pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:03.238968  302869 pod_ready.go:82] duration metric: took 6.008471722s for pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:03.238978  302869 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:03.746041  302869 pod_ready.go:93] pod "etcd-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:03.746069  302869 pod_ready.go:82] duration metric: took 507.084418ms for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:03.746078  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
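The pod_ready lines above wait up to 4m0s for each system-critical pod to report a Ready condition of True. A minimal client-go sketch of that check follows; it is not minikube's pod_ready.go, and the kubeconfig path and 2s poll interval are illustrative assumptions (the pod name is taken from the log).

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's PodReady condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig path; minikube builds its client differently.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-xnfsk", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to become Ready")
		case <-time.After(2 * time.Second):
		}
	}
}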
	I0920 19:05:03.072306  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0920 19:05:03.096078  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:05:03.122027  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:05:03.150314  303063 provision.go:87] duration metric: took 321.970593ms to configureAuth
	I0920 19:05:03.150345  303063 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:05:03.150557  303063 config.go:182] Loaded profile config "default-k8s-diff-port-612312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:05:03.150650  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.153988  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.154472  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.154524  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.154631  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.154840  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.155194  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.155397  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.155741  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:03.155990  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:03.156011  303063 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:05:03.417981  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:05:03.418020  303063 machine.go:96] duration metric: took 939.078754ms to provisionDockerMachine
	I0920 19:05:03.418038  303063 start.go:293] postStartSetup for "default-k8s-diff-port-612312" (driver="kvm2")
	I0920 19:05:03.418052  303063 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:05:03.418083  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.418456  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:05:03.418496  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.421689  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.422245  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.422282  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.422539  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.422747  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.422991  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.423144  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.509122  303063 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:05:03.515233  303063 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:05:03.515263  303063 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:05:03.515343  303063 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:05:03.515441  303063 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:05:03.515561  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:05:03.529346  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:03.559267  303063 start.go:296] duration metric: took 141.209592ms for postStartSetup
	I0920 19:05:03.559320  303063 fix.go:56] duration metric: took 20.196510123s for fixHost
	I0920 19:05:03.559348  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.563599  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.564320  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.564354  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.564605  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.564917  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.565120  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.565379  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.565588  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:03.565813  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:03.565827  303063 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:05:03.674803  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859103.651785276
	
	I0920 19:05:03.674833  303063 fix.go:216] guest clock: 1726859103.651785276
	I0920 19:05:03.674840  303063 fix.go:229] Guest: 2024-09-20 19:05:03.651785276 +0000 UTC Remote: 2024-09-20 19:05:03.559326363 +0000 UTC m=+280.560675514 (delta=92.458913ms)
	I0920 19:05:03.674862  303063 fix.go:200] guest clock delta is within tolerance: 92.458913ms
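The fix.go lines above compare the guest's clock (the output of "date +%s.%N" run over SSH) against the host's wall clock and accept the machine when the delta is within tolerance. A minimal Go sketch of that comparison follows; it is not minikube's fix.go, the 2s tolerance is an assumption, and the hard-coded sample value is copied from the log (so a run today would report the delta as out of tolerance).

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the "seconds.nanoseconds" output of date +%s.%N
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726859103.651785276\n") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // assumption; minikube's threshold may differ
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %s is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %s exceeds tolerance, would resync\n", delta)
	}
}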
	I0920 19:05:03.674867  303063 start.go:83] releasing machines lock for "default-k8s-diff-port-612312", held for 20.312097182s
	I0920 19:05:03.674897  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.675183  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:03.677975  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.678374  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.678406  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.678552  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.679080  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.679255  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.679380  303063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:05:03.679429  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.679442  303063 ssh_runner.go:195] Run: cat /version.json
	I0920 19:05:03.679472  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.682443  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.682733  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.682876  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.682902  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.683014  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.683081  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.683104  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.683222  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.683326  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.683440  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.683512  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.683634  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.683721  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.683753  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.766786  303063 ssh_runner.go:195] Run: systemctl --version
	I0920 19:05:03.806684  303063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:05:03.950032  303063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:05:03.957153  303063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:05:03.957230  303063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:05:03.976784  303063 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:05:03.976814  303063 start.go:495] detecting cgroup driver to use...
	I0920 19:05:03.976902  303063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:05:03.994391  303063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:05:04.009961  303063 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:05:04.010021  303063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:05:04.023827  303063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:05:04.038585  303063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:05:04.157489  303063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:05:04.320396  303063 docker.go:233] disabling docker service ...
	I0920 19:05:04.320477  303063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:05:04.334865  303063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:05:04.350776  303063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:05:04.469438  303063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:05:04.596055  303063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:05:04.610548  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:05:04.629128  303063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:05:04.629192  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.640211  303063 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:05:04.640289  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.650877  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.661863  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.672695  303063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:05:04.684141  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.696358  303063 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.714936  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.726155  303063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:05:04.737400  303063 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:05:04.737460  303063 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:05:04.752752  303063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:05:04.767664  303063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:04.892509  303063 ssh_runner.go:195] Run: sudo systemctl restart crio
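The sed commands above pin the pause image to registry.k8s.io/pause:3.10 and switch CRI-O's cgroup manager to cgroupfs before crio is restarted. A minimal Go sketch that applies those two substitutions to a local copy of 02-crio.conf follows; it is not minikube's crio.go, and editing the file directly rather than over SSH (and the file path itself) are assumptions for a local demo.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// configureCrio rewrites the pause_image and cgroup_manager keys in a
// CRI-O drop-in config, mirroring the sed commands shown in the log.
func configureCrio(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf := string(data)
	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return os.WriteFile(path, []byte(conf), 0o644)
}

func main() {
	if err := configureCrio("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}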
	I0920 19:05:04.992361  303063 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:05:04.992465  303063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:05:04.997119  303063 start.go:563] Will wait 60s for crictl version
	I0920 19:05:04.997215  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:05:05.001132  303063 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:05:05.050835  303063 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:05:05.050955  303063 ssh_runner.go:195] Run: crio --version
	I0920 19:05:05.079870  303063 ssh_runner.go:195] Run: crio --version
	I0920 19:05:05.112325  303063 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 19:05:05.113600  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:05.116591  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:05.117037  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:05.117075  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:05.117334  303063 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 19:05:05.122086  303063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:05.135489  303063 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-612312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-612312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.230 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:05:05.135682  303063 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:05:05.135776  303063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:05.174026  303063 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 19:05:05.174090  303063 ssh_runner.go:195] Run: which lz4
	I0920 19:05:05.179003  303063 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:05:05.184119  303063 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:05:05.184168  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 19:05:06.479331  303063 crio.go:462] duration metric: took 1.300388015s to copy over tarball
	I0920 19:05:06.479434  303063 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
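The three steps above are the image-preload path: check whether /preloaded.tar.lz4 already exists on the guest, copy the cached tarball over if not (about 388 MB here), then unpack it into /var with lz4 so CRI-O starts with the Kubernetes images already present. A compressed sketch of that decision, assuming local commands instead of the SSH/scp transfer and a hypothetical cache path:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

const target = "/preloaded.tar.lz4"

func main() {
	// Hypothetical local cache location; the real path comes from
	// minikube's cache directory shown in the log.
	src := "/home/user/.minikube/cache/preloaded-images.tar.lz4"

	if _, err := os.Stat(target); err != nil {
		fmt.Println("tarball missing on target, copying it over")
		if err := exec.Command("sudo", "cp", src, target).Run(); err != nil {
			fmt.Println("copy failed:", err)
			return
		}
	}
	// Preserve extended attributes (file capabilities) while extracting,
	// matching the tar flags in the log.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", target)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
	}
}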
	I0920 19:05:05.040094  303486 main.go:141] libmachine: (old-k8s-version-425599) Waiting to get IP...
	I0920 19:05:05.041198  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:05.041615  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:05.041711  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:05.041616  304380 retry.go:31] will retry after 264.073086ms: waiting for machine to come up
	I0920 19:05:05.307229  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:05.307761  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:05.307784  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:05.307713  304380 retry.go:31] will retry after 317.541552ms: waiting for machine to come up
	I0920 19:05:05.627262  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:05.627903  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:05.627929  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:05.627797  304380 retry.go:31] will retry after 432.236037ms: waiting for machine to come up
	I0920 19:05:06.062368  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:06.062842  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:06.062873  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:06.062804  304380 retry.go:31] will retry after 525.683807ms: waiting for machine to come up
	I0920 19:05:06.590915  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:06.591405  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:06.591434  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:06.591355  304380 retry.go:31] will retry after 542.00244ms: waiting for machine to come up
	I0920 19:05:07.135388  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:07.135944  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:07.135998  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:07.135908  304380 retry.go:31] will retry after 886.798885ms: waiting for machine to come up
	I0920 19:05:08.024147  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:08.024684  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:08.024713  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:08.024596  304380 retry.go:31] will retry after 826.869965ms: waiting for machine to come up
	I0920 19:05:08.853176  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:08.853793  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:08.853828  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:08.853736  304380 retry.go:31] will retry after 1.007422775s: waiting for machine to come up
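The repeated "will retry after ..." lines come from a retry helper that backs off while polling libvirt for the VM's DHCP lease. A bare-bones version of that pattern, with a randomized, growing delay and a caller-supplied probe (the probe here is a stand-in, not the actual libvirt lookup):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling probe until it succeeds or attempts run out,
// sleeping a randomized, growing interval between tries.
func retryWithBackoff(attempts int, base time.Duration, probe func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = probe(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	tries := 0
	err := retryWithBackoff(10, 250*time.Millisecond, func() error {
		tries++
		if tries < 4 { // stand-in for "no DHCP lease yet"
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("done:", err)
}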
	I0920 19:05:05.756992  302869 pod_ready.go:103] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:08.255312  302869 pod_ready.go:103] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:08.656490  303063 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.1770136s)
	I0920 19:05:08.656529  303063 crio.go:469] duration metric: took 2.177156837s to extract the tarball
	I0920 19:05:08.656539  303063 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 19:05:08.693153  303063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:08.733444  303063 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:05:08.733473  303063 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:05:08.733484  303063 kubeadm.go:934] updating node { 192.168.50.230 8444 v1.31.1 crio true true} ...
	I0920 19:05:08.733624  303063 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-612312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-612312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:05:08.733710  303063 ssh_runner.go:195] Run: crio config
	I0920 19:05:08.777872  303063 cni.go:84] Creating CNI manager for ""
	I0920 19:05:08.777913  303063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:05:08.777927  303063 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:05:08.777957  303063 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.230 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-612312 NodeName:default-k8s-diff-port-612312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:05:08.778143  303063 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.230
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-612312"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
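The block above is the fully rendered kubeadm configuration (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file) that is later copied to /var/tmp/minikube/kubeadm.yaml. As a sketch of how such a file can be produced from the cluster parameters, here is a minimal text/template rendering of just the InitConfiguration header; the field values match this log, but the template and struct are illustrative rather than minikube's own generator:

package main

import (
	"os"
	"text/template"
)

type initParams struct {
	AdvertiseAddress string
	BindPort         int
	CRISocket        string
	NodeName         string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`

func main() {
	p := initParams{
		AdvertiseAddress: "192.168.50.230",
		BindPort:         8444,
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "default-k8s-diff-port-612312",
	}
	t := template.Must(template.New("init").Parse(initCfg))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}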
	
	I0920 19:05:08.778220  303063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:05:08.788133  303063 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:05:08.788208  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:05:08.797461  303063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0920 19:05:08.814111  303063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:05:08.832188  303063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0920 19:05:08.849801  303063 ssh_runner.go:195] Run: grep 192.168.50.230	control-plane.minikube.internal$ /etc/hosts
	I0920 19:05:08.853809  303063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:08.865685  303063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:08.985881  303063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:05:09.002387  303063 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312 for IP: 192.168.50.230
	I0920 19:05:09.002417  303063 certs.go:194] generating shared ca certs ...
	I0920 19:05:09.002441  303063 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:09.002656  303063 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:05:09.002727  303063 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:05:09.002741  303063 certs.go:256] generating profile certs ...
	I0920 19:05:09.002859  303063 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/client.key
	I0920 19:05:09.002940  303063 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/apiserver.key.637d18af
	I0920 19:05:09.002990  303063 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/proxy-client.key
	I0920 19:05:09.003207  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:05:09.003248  303063 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:05:09.003256  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:05:09.003278  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:05:09.003306  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:05:09.003328  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:05:09.003365  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:09.004030  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:05:09.037203  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:05:09.068858  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:05:09.095082  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:05:09.122167  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 19:05:09.147953  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 19:05:09.174251  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:05:09.202438  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:05:09.231354  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:05:09.256365  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:05:09.282589  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:05:09.308610  303063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:05:09.328798  303063 ssh_runner.go:195] Run: openssl version
	I0920 19:05:09.334685  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:05:09.345947  303063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:09.350772  303063 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:09.350838  303063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:09.356595  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:05:09.367559  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:05:09.380638  303063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:05:09.385362  303063 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:05:09.385429  303063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:05:09.391299  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:05:09.402065  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:05:09.412841  303063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:05:09.417074  303063 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:05:09.417138  303063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:05:09.422761  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:05:09.433780  303063 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:05:09.438734  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:05:09.444888  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:05:09.450715  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:05:09.456993  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:05:09.462716  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:05:09.468847  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
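Each "openssl x509 ... -checkend 86400" above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would force regeneration before kubeadm is run. The same check expressed in Go against one of the certificate files probed above (a sketch of the equivalent logic, not the code that produced these log lines):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// inside the given window, the moral equivalent of openssl's -checkend.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}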
	I0920 19:05:09.474680  303063 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-612312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:default-k8s-diff-port-612312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.230 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:05:09.474780  303063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:05:09.474844  303063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:09.513886  303063 cri.go:89] found id: ""
	I0920 19:05:09.514006  303063 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:05:09.524385  303063 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:05:09.524417  303063 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:05:09.524479  303063 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:05:09.534288  303063 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:05:09.535251  303063 kubeconfig.go:125] found "default-k8s-diff-port-612312" server: "https://192.168.50.230:8444"
	I0920 19:05:09.537293  303063 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:05:09.547753  303063 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.230
	I0920 19:05:09.547796  303063 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:05:09.547812  303063 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:05:09.547890  303063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:09.590656  303063 cri.go:89] found id: ""
	I0920 19:05:09.590743  303063 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:05:09.607426  303063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:05:09.617258  303063 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:05:09.617280  303063 kubeadm.go:157] found existing configuration files:
	
	I0920 19:05:09.617344  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 19:05:09.626725  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:05:09.626813  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:05:09.636421  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 19:05:09.645711  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:05:09.645780  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:05:09.655351  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 19:05:09.664771  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:05:09.664833  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:05:09.674556  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 19:05:09.683677  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:05:09.683821  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:05:09.695159  303063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:05:09.704995  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:09.821398  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:10.642045  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:10.870266  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:10.935191  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
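Because existing configuration files were found, the restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full kubeadm init. A stripped-down sketch of driving those same phases in order with the versioned kubeadm binaries prepended to PATH, run locally here instead of through the SSH runner:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		// Prefix PATH with the versioned kubeadm binaries, as in the log.
		args := []string{"env", "PATH=/var/lib/minikube/binaries/v1.31.1:" + os.Getenv("PATH"), "kubeadm", "init", "phase"}
		args = append(args, strings.Fields(phase)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("phase %q failed: %v\n%s\n", phase, err, out)
			return
		}
		fmt.Println("completed phase:", phase)
	}
}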
	I0920 19:05:11.015669  303063 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:05:11.015787  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:11.516670  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:12.016486  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:12.516070  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:13.016012  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:13.031718  303063 api_server.go:72] duration metric: took 2.016048489s to wait for apiserver process to appear ...
	I0920 19:05:13.031752  303063 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:05:13.031781  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:13.032414  303063 api_server.go:269] stopped: https://192.168.50.230:8444/healthz: Get "https://192.168.50.230:8444/healthz": dial tcp 192.168.50.230:8444: connect: connection refused
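After the control-plane phases the code waits first for a kube-apiserver process (the pgrep loop) and then polls /healthz until it answers 200; the "connection refused" result above just means the apiserver socket isn't listening yet, so the poll continues. A self-contained healthz poller in the same spirit, assuming an unauthenticated HTTPS probe with certificate verification disabled (the 403 "system:anonymous" responses later in this log suggest the probe is anonymous):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 or
// the deadline passes. Non-200 answers (403, 500) and connection errors are
// all treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Println("healthz not ready, status:", resp.StatusCode)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.230:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}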
	I0920 19:05:09.863227  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:09.863693  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:09.863721  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:09.863640  304380 retry.go:31] will retry after 1.556199895s: waiting for machine to come up
	I0920 19:05:11.422510  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:11.423244  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:11.423271  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:11.423179  304380 retry.go:31] will retry after 1.670177778s: waiting for machine to come up
	I0920 19:05:13.095982  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:13.096600  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:13.096626  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:13.096545  304380 retry.go:31] will retry after 2.71780554s: waiting for machine to come up
	I0920 19:05:10.256325  302869 pod_ready.go:93] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.256352  302869 pod_ready.go:82] duration metric: took 6.510267221s for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.256361  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.263229  302869 pod_ready.go:93] pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.263254  302869 pod_ready.go:82] duration metric: took 6.886052ms for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.263264  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xs4nd" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.270014  302869 pod_ready.go:93] pod "kube-proxy-xs4nd" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.270040  302869 pod_ready.go:82] duration metric: took 6.769102ms for pod "kube-proxy-xs4nd" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.270049  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.277232  302869 pod_ready.go:93] pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.277262  302869 pod_ready.go:82] duration metric: took 7.203732ms for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.277275  302869 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:12.284083  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:14.284983  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:13.532830  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:15.579530  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:05:15.579567  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:05:15.579584  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:15.596526  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:05:15.596570  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:05:16.032011  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:16.039310  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:05:16.039346  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:05:16.531881  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:16.536703  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:05:16.536736  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:05:17.032322  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:17.036979  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 200:
	ok
	I0920 19:05:17.043667  303063 api_server.go:141] control plane version: v1.31.1
	I0920 19:05:17.043701  303063 api_server.go:131] duration metric: took 4.011936277s to wait for apiserver health ...
	I0920 19:05:17.043710  303063 cni.go:84] Creating CNI manager for ""
	I0920 19:05:17.043716  303063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:05:17.045376  303063 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:05:17.046579  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:05:17.056771  303063 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
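Here the bridge CNI is selected (kvm2 driver plus crio runtime with no explicit CNI) and a conflist is written to /etc/cni/net.d/1-k8s.conflist. The exact contents of minikube's file are not reproduced in the log; the sketch below emits a generic bridge-plus-host-local conflist of the usual shape for the 10.244.0.0/16 pod CIDR used above, purely as an illustration of the file format:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Generic bridge CNI config; values are examples, not minikube's exact file.
	conf := map[string]interface{}{
		"cniVersion": "0.4.0",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(out))
}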
	I0920 19:05:17.076571  303063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:05:17.085546  303063 system_pods.go:59] 8 kube-system pods found
	I0920 19:05:17.085584  303063 system_pods.go:61] "coredns-7c65d6cfc9-427x2" [48b87f9f-4697-4d76-aed1-3d54720172c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:05:17.085591  303063 system_pods.go:61] "etcd-default-k8s-diff-port-612312" [8537d9ce-87da-425d-a90a-eb4f30f9d23f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 19:05:17.085597  303063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-612312" [d5705298-0b1f-4f95-bf5d-c795e09dd30e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 19:05:17.085608  303063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-612312" [99b04f73-91d6-42d4-8def-09b65ca142f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:05:17.085615  303063 system_pods.go:61] "kube-proxy-zp8l5" [9fe30e51-ef3f-4448-916a-8ad75832b207] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 19:05:17.085624  303063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-612312" [4b9dfd39-5b60-4a90-9edf-0af7e99f0ac6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 19:05:17.085631  303063 system_pods.go:61] "metrics-server-6867b74b74-2tnqc" [35ce9a11-e606-41da-84bf-b3c5e9a18245] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:05:17.085638  303063 system_pods.go:61] "storage-provisioner" [6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 19:05:17.085646  303063 system_pods.go:74] duration metric: took 9.051189ms to wait for pod list to return data ...
	I0920 19:05:17.085657  303063 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:05:17.089161  303063 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:05:17.089190  303063 node_conditions.go:123] node cpu capacity is 2
	I0920 19:05:17.089201  303063 node_conditions.go:105] duration metric: took 3.534622ms to run NodePressure ...
	I0920 19:05:17.089218  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:17.442957  303063 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 19:05:17.447222  303063 kubeadm.go:739] kubelet initialised
	I0920 19:05:17.447247  303063 kubeadm.go:740] duration metric: took 4.255349ms waiting for restarted kubelet to initialise ...
	I0920 19:05:17.447255  303063 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:05:17.451839  303063 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.457216  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.457240  303063 pod_ready.go:82] duration metric: took 5.361636ms for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.457250  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.457256  303063 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.462245  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.462273  303063 pod_ready.go:82] duration metric: took 5.009342ms for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.462313  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.462326  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.468060  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.468087  303063 pod_ready.go:82] duration metric: took 5.75409ms for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.468099  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.468105  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.479703  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.479727  303063 pod_ready.go:82] duration metric: took 11.614638ms for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.479739  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.479750  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.879555  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-proxy-zp8l5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.879582  303063 pod_ready.go:82] duration metric: took 399.824208ms for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.879592  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-proxy-zp8l5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.879599  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:18.281551  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.281585  303063 pod_ready.go:82] duration metric: took 401.976884ms for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:18.281601  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.281611  303063 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:18.680674  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.680711  303063 pod_ready.go:82] duration metric: took 399.091849ms for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:18.680723  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.680730  303063 pod_ready.go:39] duration metric: took 1.233465539s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
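The pod_ready loop above skips each system pod because the node itself still reports Ready=False right after the kubelet restart; the wait only succeeds once both the node and the pod conditions flip. A client-go sketch of one such wait (the kubeconfig path is a placeholder; this shows the general flow, not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true when the pod carries a Ready=True condition.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	podName := "etcd-default-k8s-diff-port-612312" // one of the pods waited on above
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, podName, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient errors: keep polling
			}
			return isPodReady(pod), nil
		})
	fmt.Println("wait finished:", err)
}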
	I0920 19:05:18.680747  303063 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:05:18.692948  303063 ops.go:34] apiserver oom_adj: -16
	I0920 19:05:18.692970  303063 kubeadm.go:597] duration metric: took 9.168545987s to restartPrimaryControlPlane
	I0920 19:05:18.692981  303063 kubeadm.go:394] duration metric: took 9.218309896s to StartCluster
	I0920 19:05:18.692999  303063 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:18.693078  303063 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:05:18.694921  303063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:18.695293  303063 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.230 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:05:18.696157  303063 config.go:182] Loaded profile config "default-k8s-diff-port-612312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:05:18.696187  303063 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:05:18.696357  303063 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-612312"
	I0920 19:05:18.696377  303063 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-612312"
	W0920 19:05:18.696387  303063 addons.go:243] addon storage-provisioner should already be in state true
	I0920 19:05:18.696419  303063 host.go:66] Checking if "default-k8s-diff-port-612312" exists ...
	I0920 19:05:18.696449  303063 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-612312"
	I0920 19:05:18.696495  303063 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-612312"
	I0920 19:05:18.696506  303063 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-612312"
	I0920 19:05:18.696588  303063 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-612312"
	W0920 19:05:18.696610  303063 addons.go:243] addon metrics-server should already be in state true
	I0920 19:05:18.696709  303063 host.go:66] Checking if "default-k8s-diff-port-612312" exists ...
	I0920 19:05:18.697239  303063 out.go:177] * Verifying Kubernetes components...
	I0920 19:05:18.697334  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.697386  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.697409  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.697409  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.697442  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.697531  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.698927  303063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:18.713346  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44359
	I0920 19:05:18.713346  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35127
	I0920 19:05:18.713967  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.714000  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.714472  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.714491  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.714572  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.714588  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.714961  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.714965  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.715163  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.715842  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.715893  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.717732  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I0920 19:05:18.718289  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.718553  303063 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-612312"
	W0920 19:05:18.718575  303063 addons.go:243] addon default-storageclass should already be in state true
	I0920 19:05:18.718604  303063 host.go:66] Checking if "default-k8s-diff-port-612312" exists ...
	I0920 19:05:18.718827  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.718852  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.718926  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.718956  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.719243  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.719782  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.719826  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.733219  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42087
	I0920 19:05:18.733789  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.734403  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.734422  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.734463  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41091
	I0920 19:05:18.734905  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.734993  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.735207  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.735363  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.735394  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.735703  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.736264  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.736321  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.737489  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:18.739977  303063 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 19:05:18.740477  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39547
	I0920 19:05:18.741217  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.741752  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:05:18.741770  303063 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:05:18.741791  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:18.741854  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.741875  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.742351  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.742547  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.744800  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:18.746006  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.746416  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:18.746442  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.746695  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:18.746961  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:18.746974  303063 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:15.815519  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:15.816035  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:15.816065  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:15.815974  304380 retry.go:31] will retry after 2.62788631s: waiting for machine to come up
	I0920 19:05:18.446768  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:18.447219  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:18.447240  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:18.447166  304380 retry.go:31] will retry after 4.025841071s: waiting for machine to come up
	I0920 19:05:16.784503  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:18.785829  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:18.747159  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:18.747332  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:18.748881  303063 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:05:18.748901  303063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:05:18.748932  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:18.752335  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.752787  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:18.752812  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.753180  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:18.753340  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:18.753491  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:18.753628  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:18.755106  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I0920 19:05:18.755543  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.756159  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.756182  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.756521  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.756710  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.758400  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:18.758674  303063 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:05:18.758690  303063 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:05:18.758707  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:18.762208  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.762748  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:18.762776  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.762950  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:18.763235  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:18.763518  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:18.763678  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:18.900876  303063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:05:18.919923  303063 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-612312" to be "Ready" ...
	I0920 19:05:18.993779  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:05:18.993814  303063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 19:05:19.001703  303063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:05:19.019424  303063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:05:19.054174  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:05:19.054202  303063 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:05:19.123651  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:05:19.123682  303063 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:05:19.186745  303063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:05:19.369866  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:19.369898  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:19.370210  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:19.370229  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:19.370246  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:19.370270  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:19.370552  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:19.370593  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:19.370625  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Closing plugin on server side
	I0920 19:05:19.380105  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:19.380140  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:19.380456  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:19.380472  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.145346  303063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.12587258s)
	I0920 19:05:20.145412  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.145427  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.145769  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Closing plugin on server side
	I0920 19:05:20.145834  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.145846  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.145866  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.145877  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.146126  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.146144  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.152067  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.152084  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.152361  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.152379  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.152388  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.152395  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.152625  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.152662  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Closing plugin on server side
	I0920 19:05:20.152711  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.152729  303063 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-612312"
	I0920 19:05:20.154940  303063 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0920 19:05:20.156326  303063 addons.go:510] duration metric: took 1.460148296s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0920 19:05:20.923687  303063 node_ready.go:53] node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:22.924271  303063 node_ready.go:53] node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:23.791151  302538 start.go:364] duration metric: took 54.811585482s to acquireMachinesLock for "no-preload-037711"
	I0920 19:05:23.791208  302538 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:05:23.791219  302538 fix.go:54] fixHost starting: 
	I0920 19:05:23.791657  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:23.791696  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:23.809350  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32987
	I0920 19:05:23.809873  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:23.810520  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:05:23.810555  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:23.810893  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:23.811118  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:23.811286  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:05:23.812885  302538 fix.go:112] recreateIfNeeded on no-preload-037711: state=Stopped err=<nil>
	I0920 19:05:23.812914  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	W0920 19:05:23.813135  302538 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:05:23.815287  302538 out.go:177] * Restarting existing kvm2 VM for "no-preload-037711" ...
	I0920 19:05:22.477850  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.478419  303486 main.go:141] libmachine: (old-k8s-version-425599) Found IP for machine: 192.168.39.53
	I0920 19:05:22.478454  303486 main.go:141] libmachine: (old-k8s-version-425599) Reserving static IP address...
	I0920 19:05:22.478473  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has current primary IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.478983  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "old-k8s-version-425599", mac: "52:54:00:d2:a5:70", ip: "192.168.39.53"} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.479021  303486 main.go:141] libmachine: (old-k8s-version-425599) Reserved static IP address: 192.168.39.53
	I0920 19:05:22.479040  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | skip adding static IP to network mk-old-k8s-version-425599 - found existing host DHCP lease matching {name: "old-k8s-version-425599", mac: "52:54:00:d2:a5:70", ip: "192.168.39.53"}
	I0920 19:05:22.479055  303486 main.go:141] libmachine: (old-k8s-version-425599) Waiting for SSH to be available...
	I0920 19:05:22.479067  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | Getting to WaitForSSH function...
	I0920 19:05:22.481118  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.481359  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.481382  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.481556  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | Using SSH client type: external
	I0920 19:05:22.481570  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa (-rw-------)
	I0920 19:05:22.481600  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:05:22.481612  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | About to run SSH command:
	I0920 19:05:22.481627  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | exit 0
	I0920 19:05:22.606383  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | SSH cmd err, output: <nil>: 
	I0920 19:05:22.606783  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetConfigRaw
	I0920 19:05:22.607408  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:22.610155  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.610474  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.610506  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.610784  303486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/config.json ...
	I0920 19:05:22.611075  303486 machine.go:93] provisionDockerMachine start ...
	I0920 19:05:22.611103  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:22.611332  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.613838  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.614250  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.614283  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.614395  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:22.614609  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.614776  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.614950  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:22.615136  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:22.615331  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:22.615344  303486 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:05:22.718330  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:05:22.718363  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 19:05:22.718651  303486 buildroot.go:166] provisioning hostname "old-k8s-version-425599"
	I0920 19:05:22.718697  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 19:05:22.718913  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.722027  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.722334  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.722370  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.722559  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:22.722738  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.722909  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.723086  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:22.723261  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:22.723473  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:22.723491  303486 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-425599 && echo "old-k8s-version-425599" | sudo tee /etc/hostname
	I0920 19:05:22.841563  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-425599
	
	I0920 19:05:22.841592  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.844327  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.844716  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.844748  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.844970  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:22.845154  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.845306  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.845413  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:22.845570  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:22.845793  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:22.845818  303486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-425599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-425599/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-425599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:05:22.959542  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:05:22.959572  303486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:05:22.959615  303486 buildroot.go:174] setting up certificates
	I0920 19:05:22.959625  303486 provision.go:84] configureAuth start
	I0920 19:05:22.959635  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 19:05:22.959972  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:22.962506  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.962845  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.962883  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.963020  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.965352  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.965734  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.965755  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.965936  303486 provision.go:143] copyHostCerts
	I0920 19:05:22.965999  303486 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:05:22.966018  303486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:05:22.966073  303486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:05:22.966165  303486 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:05:22.966173  303486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:05:22.966193  303486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:05:22.966250  303486 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:05:22.966257  303486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:05:22.966274  303486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:05:22.966368  303486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-425599 san=[127.0.0.1 192.168.39.53 localhost minikube old-k8s-version-425599]
	I0920 19:05:23.156245  303486 provision.go:177] copyRemoteCerts
	I0920 19:05:23.156322  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:05:23.156356  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.159694  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.160062  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.160105  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.160283  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.160467  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.160633  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.160755  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.244439  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:05:23.271796  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 19:05:23.298124  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:05:23.323466  303486 provision.go:87] duration metric: took 363.82725ms to configureAuth
	I0920 19:05:23.323496  303486 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:05:23.323711  303486 config.go:182] Loaded profile config "old-k8s-version-425599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 19:05:23.323805  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.326985  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.327336  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.327363  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.327573  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.327788  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.328003  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.328161  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.328315  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:23.328492  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:23.328506  303486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:05:23.559721  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:05:23.559755  303486 machine.go:96] duration metric: took 948.663189ms to provisionDockerMachine
	I0920 19:05:23.559770  303486 start.go:293] postStartSetup for "old-k8s-version-425599" (driver="kvm2")
	I0920 19:05:23.559781  303486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:05:23.559812  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.560186  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:05:23.560225  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.563146  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.563462  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.563491  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.563786  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.564018  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.564214  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.564365  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.645013  303486 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:05:23.649198  303486 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:05:23.649230  303486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:05:23.649300  303486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:05:23.649416  303486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:05:23.649544  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:05:23.659351  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:23.683405  303486 start.go:296] duration metric: took 123.617289ms for postStartSetup
	I0920 19:05:23.683466  303486 fix.go:56] duration metric: took 20.008417985s for fixHost
	I0920 19:05:23.683495  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.686540  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.686962  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.686988  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.687209  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.687445  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.687624  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.687803  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.688001  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:23.688188  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:23.688206  303486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:05:23.790992  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859123.767729644
	
	I0920 19:05:23.791024  303486 fix.go:216] guest clock: 1726859123.767729644
	I0920 19:05:23.791035  303486 fix.go:229] Guest: 2024-09-20 19:05:23.767729644 +0000 UTC Remote: 2024-09-20 19:05:23.683472425 +0000 UTC m=+234.770765310 (delta=84.257219ms)
	I0920 19:05:23.791061  303486 fix.go:200] guest clock delta is within tolerance: 84.257219ms
	I0920 19:05:23.791068  303486 start.go:83] releasing machines lock for "old-k8s-version-425599", held for 20.116056408s
	I0920 19:05:23.791101  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.791432  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:23.794540  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.795015  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.795048  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.795226  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.795779  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.795992  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.796129  303486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:05:23.796180  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.796241  303486 ssh_runner.go:195] Run: cat /version.json
	I0920 19:05:23.796265  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.799032  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.799374  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.799399  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.799418  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.799540  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.799743  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.799874  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.799890  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.799906  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.800084  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.800077  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.800198  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.800365  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.800514  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.924885  303486 ssh_runner.go:195] Run: systemctl --version
	I0920 19:05:23.932642  303486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:05:21.284671  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:23.284813  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:24.083860  303486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:05:24.090360  303486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:05:24.090444  303486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:05:24.112281  303486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:05:24.112310  303486 start.go:495] detecting cgroup driver to use...
	I0920 19:05:24.112383  303486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:05:24.136600  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:05:24.154552  303486 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:05:24.154631  303486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:05:24.170600  303486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:05:24.186071  303486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:05:24.319752  303486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:05:24.498299  303486 docker.go:233] disabling docker service ...
	I0920 19:05:24.498385  303486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:05:24.515762  303486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:05:24.533482  303486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:05:24.687481  303486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:05:24.820191  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:05:24.835255  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:05:24.856179  303486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 19:05:24.856253  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.868991  303486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:05:24.869080  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.884074  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.898732  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.911016  303486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:05:24.922757  303486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:05:24.937719  303486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:05:24.937828  303486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:05:24.955496  303486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:05:24.966347  303486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:25.114758  303486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:05:25.226807  303486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:05:25.226984  303486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:05:25.234576  303486 start.go:563] Will wait 60s for crictl version
	I0920 19:05:25.234664  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:25.238739  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:05:25.282242  303486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:05:25.282344  303486 ssh_runner.go:195] Run: crio --version
	I0920 19:05:25.317733  303486 ssh_runner.go:195] Run: crio --version
	I0920 19:05:25.353767  303486 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
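The block above swaps the node over to CRI-O purely with sed edits against the drop-in config, then restarts the daemon and probes it through crictl. A minimal sketch of the same steps run by hand on the node (paths, pause image tag and cgroup driver are taken from the log; the drop-in file name can differ on other ISO builds):

  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
  sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
  sudo systemctl daemon-reload && sudo systemctl restart crio
  sudo /usr/bin/crictl version    # expect RuntimeName cri-o, RuntimeVersion 1.29.1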
	I0920 19:05:23.816707  302538 main.go:141] libmachine: (no-preload-037711) Calling .Start
	I0920 19:05:23.817003  302538 main.go:141] libmachine: (no-preload-037711) Ensuring networks are active...
	I0920 19:05:23.817953  302538 main.go:141] libmachine: (no-preload-037711) Ensuring network default is active
	I0920 19:05:23.818345  302538 main.go:141] libmachine: (no-preload-037711) Ensuring network mk-no-preload-037711 is active
	I0920 19:05:23.818824  302538 main.go:141] libmachine: (no-preload-037711) Getting domain xml...
	I0920 19:05:23.819705  302538 main.go:141] libmachine: (no-preload-037711) Creating domain...
	I0920 19:05:25.216298  302538 main.go:141] libmachine: (no-preload-037711) Waiting to get IP...
	I0920 19:05:25.217452  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:25.218073  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:25.218138  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:25.218047  304582 retry.go:31] will retry after 256.299732ms: waiting for machine to come up
	I0920 19:05:25.475745  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:25.476451  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:25.476485  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:25.476388  304582 retry.go:31] will retry after 298.732749ms: waiting for machine to come up
	I0920 19:05:25.777093  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:25.777731  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:25.777755  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:25.777701  304582 retry.go:31] will retry after 360.011383ms: waiting for machine to come up
	I0920 19:05:26.139480  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:26.140100  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:26.140132  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:26.140049  304582 retry.go:31] will retry after 593.756705ms: waiting for machine to come up
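The retry lines above are libmachine polling libvirt until the freshly started domain picks up a DHCP lease on its private network. Assuming shell access to the libvirt host (and a libvirt recent enough to expose these subcommands), the same state can be inspected directly; the network and domain names are the ones in the log:

  virsh net-dhcp-leases mk-no-preload-037711          # leases handed out on the profile's network
  virsh domifaddr no-preload-037711 --source lease    # addresses libvirt currently knows for the domain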
	I0920 19:05:24.924455  303063 node_ready.go:53] node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:26.425132  303063 node_ready.go:49] node "default-k8s-diff-port-612312" has status "Ready":"True"
	I0920 19:05:26.425165  303063 node_ready.go:38] duration metric: took 7.505210484s for node "default-k8s-diff-port-612312" to be "Ready" ...
	I0920 19:05:26.425181  303063 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:05:26.433394  303063 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:26.440462  303063 pod_ready.go:93] pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:26.440497  303063 pod_ready.go:82] duration metric: took 7.072952ms for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:26.440513  303063 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:25.354959  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:25.358179  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:25.358467  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:25.358495  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:25.358739  303486 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 19:05:25.362714  303486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:25.375880  303486 kubeadm.go:883] updating cluster {Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:05:25.376024  303486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 19:05:25.376074  303486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:25.420224  303486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 19:05:25.420307  303486 ssh_runner.go:195] Run: which lz4
	I0920 19:05:25.424775  303486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:05:25.430102  303486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:05:25.430151  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 19:05:27.014068  303486 crio.go:462] duration metric: took 1.589333502s to copy over tarball
	I0920 19:05:27.014160  303486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
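Since the runtime reported no preloaded images, the ~473 MB preload tarball is copied into the VM and unpacked over /var. A sketch of the equivalent manual check and extraction, using the same commands the log records:

  stat -c "%s %y" /preloaded.tar.lz4 || echo "tarball not on the node yet"
  # after copying preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4:
  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
  sudo crictl images --output json    # the v1.20.0 control-plane images should now be listed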
	I0920 19:05:25.786282  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:27.788058  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:26.735924  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:26.736558  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:26.736582  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:26.736458  304582 retry.go:31] will retry after 712.118443ms: waiting for machine to come up
	I0920 19:05:27.450059  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:27.450696  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:27.450719  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:27.450592  304582 retry.go:31] will retry after 588.649809ms: waiting for machine to come up
	I0920 19:05:28.041216  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:28.041760  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:28.041791  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:28.041691  304582 retry.go:31] will retry after 869.42079ms: waiting for machine to come up
	I0920 19:05:28.912809  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:28.913240  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:28.913265  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:28.913174  304582 retry.go:31] will retry after 1.410011475s: waiting for machine to come up
	I0920 19:05:30.324367  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:30.324952  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:30.324980  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:30.324875  304582 retry.go:31] will retry after 1.398358739s: waiting for machine to come up
	I0920 19:05:28.454512  303063 pod_ready.go:103] pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:31.546557  303063 pod_ready.go:103] pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:32.072690  303063 pod_ready.go:93] pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.072719  303063 pod_ready.go:82] duration metric: took 5.632196538s for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.072734  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.081029  303063 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.081062  303063 pod_ready.go:82] duration metric: took 8.319382ms for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.081076  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.087314  303063 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.087338  303063 pod_ready.go:82] duration metric: took 6.253184ms for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.087351  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.093286  303063 pod_ready.go:93] pod "kube-proxy-zp8l5" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.093313  303063 pod_ready.go:82] duration metric: took 5.953425ms for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.093326  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.098529  303063 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.098553  303063 pod_ready.go:82] duration metric: took 5.218413ms for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.098565  303063 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
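Each pod_ready check above polls one system-critical pod until its Ready condition turns true, using the label list printed at the start of the wait. Roughly the same wait can be expressed with kubectl against this profile's context (a sketch; context name and selectors come from the log, the timeout mirrors the 6m0s budget):

  kubectl --context default-k8s-diff-port-612312 -n kube-system wait pod \
    -l k8s-app=kube-dns --for=condition=Ready --timeout=6m0s
  kubectl --context default-k8s-diff-port-612312 -n kube-system wait pod \
    -l component=etcd --for=condition=Ready --timeout=6m0s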
	I0920 19:05:30.096727  303486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.082523066s)
	I0920 19:05:30.096778  303486 crio.go:469] duration metric: took 3.082671461s to extract the tarball
	I0920 19:05:30.096789  303486 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 19:05:30.148059  303486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:30.184547  303486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 19:05:30.184578  303486 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 19:05:30.184672  303486 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:30.184686  303486 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.184711  303486 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.184730  303486 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.184732  303486 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.184686  303486 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 19:05:30.184693  303486 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.184792  303486 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.186558  303486 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:30.186609  303486 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 19:05:30.186607  303486 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.186616  303486 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.186688  303486 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.186698  303486 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.186701  303486 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.186565  303486 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.425283  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 19:05:30.469378  303486 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 19:05:30.469448  303486 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 19:05:30.469514  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.475453  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:05:30.493250  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.505003  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.513203  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.514365  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.521729  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:05:30.533265  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.580710  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.613984  303486 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 19:05:30.614033  303486 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.614085  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.653094  303486 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 19:05:30.653150  303486 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.653205  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.675697  303486 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 19:05:30.675730  303486 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 19:05:30.675752  303486 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.675762  303486 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.675805  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.675820  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.675805  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:05:30.709199  303486 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 19:05:30.709261  303486 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.709310  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.720146  303486 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 19:05:30.720198  303486 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.720233  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.720313  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.720241  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.720374  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.720247  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.737444  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.737487  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 19:05:30.843272  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.843362  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.843366  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.860414  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.860462  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.860430  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.954641  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.982227  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.982263  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:05:31.041996  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:31.042032  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:31.042650  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:31.042722  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:31.070786  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 19:05:31.120407  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 19:05:31.135751  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 19:05:31.163591  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 19:05:31.164483  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 19:05:31.164587  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 19:05:31.345957  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:31.486337  303486 cache_images.go:92] duration metric: took 1.301737533s to LoadCachedImages
	W0920 19:05:31.486434  303486 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
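The warning above records that the per-image cache file for pause_3.2 is missing on the build host, so the LoadCachedImages pass (≈1.3s) ends without populating the runtime. A sketch of how one could confirm what the runtime actually has and pre-pull a missing image by hand (the inspect command mirrors the log; pulling via crictl is an assumption about how to fill the gap manually):

  sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2 || echo "not in local storage"
  sudo crictl images | grep -E 'pause|etcd|coredns|kube-'
  sudo crictl pull registry.k8s.io/pause:3.2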
	I0920 19:05:31.486452  303486 kubeadm.go:934] updating node { 192.168.39.53 8443 v1.20.0 crio true true} ...
	I0920 19:05:31.486576  303486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-425599 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:05:31.486661  303486 ssh_runner.go:195] Run: crio config
	I0920 19:05:31.544181  303486 cni.go:84] Creating CNI manager for ""
	I0920 19:05:31.544215  303486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:05:31.544229  303486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:05:31.544257  303486 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.53 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-425599 NodeName:old-k8s-version-425599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 19:05:31.544465  303486 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-425599"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:05:31.544556  303486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 19:05:31.559445  303486 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:05:31.559542  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:05:31.570446  303486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0920 19:05:31.588741  303486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:05:31.606454  303486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
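The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new alongside the kubelet unit and drop-in. Further down in this log it is applied not with a single kubeadm init but phase by phase against the pinned v1.20.0 binaries; condensed, that sequence is:

  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml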
	I0920 19:05:31.624483  303486 ssh_runner.go:195] Run: grep 192.168.39.53	control-plane.minikube.internal$ /etc/hosts
	I0920 19:05:31.628285  303486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:31.641039  303486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:31.771690  303486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:05:31.789746  303486 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599 for IP: 192.168.39.53
	I0920 19:05:31.789775  303486 certs.go:194] generating shared ca certs ...
	I0920 19:05:31.789806  303486 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:31.790074  303486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:05:31.790150  303486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:05:31.790165  303486 certs.go:256] generating profile certs ...
	I0920 19:05:31.798117  303486 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/client.key
	I0920 19:05:31.798270  303486 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.key.e78cb154
	I0920 19:05:31.798333  303486 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.key
	I0920 19:05:31.798499  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:05:31.798543  303486 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:05:31.798557  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:05:31.798608  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:05:31.798659  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:05:31.798692  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:05:31.798748  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:31.799624  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:05:31.843298  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:05:31.877299  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:05:31.909777  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:05:31.947787  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 19:05:31.991175  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 19:05:32.019393  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:05:32.048475  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:05:32.084354  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:05:32.112161  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:05:32.138991  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:05:32.167653  303486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:05:32.185485  303486 ssh_runner.go:195] Run: openssl version
	I0920 19:05:32.192030  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:05:32.203761  303486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:32.209550  303486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:32.209650  303486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:32.216277  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:05:32.228192  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:05:32.239984  303486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:05:32.244782  303486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:05:32.244848  303486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:05:32.250865  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:05:32.262035  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:05:32.273790  303486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:05:32.279335  303486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:05:32.279414  303486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:05:32.286501  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:05:32.298052  303486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:05:32.303064  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:05:32.309973  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:05:32.316704  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:05:32.323166  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:05:32.330126  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:05:32.336554  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
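The certificate handling above does two things: each CA under /usr/share/ca-certificates is linked into /etc/ssl/certs under its OpenSSL subject-hash name so the system trust store resolves it, and every control-plane certificate is checked for expiry within the next 24h (86400s). A condensed sketch of both checks, using the minikube CA and one client cert named in the log:

  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
    && echo "valid for at least another day" || echo "expires within 24h"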
	I0920 19:05:32.343303  303486 kubeadm.go:392] StartCluster: {Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:05:32.343413  303486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:05:32.343473  303486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:32.387562  303486 cri.go:89] found id: ""
	I0920 19:05:32.387653  303486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:05:32.398143  303486 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:05:32.398167  303486 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:05:32.398222  303486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:05:32.407776  303486 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:05:32.409205  303486 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-425599" does not appear in /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:05:32.410267  303486 kubeconfig.go:62] /home/jenkins/minikube-integration/19679-237658/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-425599" cluster setting kubeconfig missing "old-k8s-version-425599" context setting]
	I0920 19:05:32.411776  303486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:32.457074  303486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:05:32.468055  303486 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.53
	I0920 19:05:32.468113  303486 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:05:32.468132  303486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:05:32.468211  303486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:32.505151  303486 cri.go:89] found id: ""
	I0920 19:05:32.505241  303486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:05:32.521391  303486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:05:32.531705  303486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:05:32.531728  303486 kubeadm.go:157] found existing configuration files:
	
	I0920 19:05:32.531774  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:05:32.541137  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:05:32.541219  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:05:32.550684  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:05:32.560262  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:05:32.560352  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:05:32.569735  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:05:32.579126  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:05:32.579199  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:05:32.589508  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:05:32.600985  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:05:32.601100  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:05:32.611511  303486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:05:32.622346  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:32.755562  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:33.793472  303486 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.037864747s)
	I0920 19:05:33.793513  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:30.283826  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:32.285077  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:31.725721  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:31.726171  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:31.726198  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:31.726127  304582 retry.go:31] will retry after 2.32427136s: waiting for machine to come up
	I0920 19:05:34.052412  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:34.053005  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:34.053043  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:34.052923  304582 retry.go:31] will retry after 2.159036217s: waiting for machine to come up
	I0920 19:05:36.215059  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:36.215561  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:36.215585  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:36.215501  304582 retry.go:31] will retry after 3.424610182s: waiting for machine to come up
	I0920 19:05:34.105780  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:36.106491  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:34.021260  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:34.142176  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:34.235507  303486 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:05:34.235618  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:34.736586  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:35.236065  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:35.735783  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:36.236406  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:36.736243  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:37.235994  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:37.736168  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:38.236559  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:38.736139  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:34.784743  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:37.282598  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:39.284890  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:39.642163  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:39.642600  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:39.642642  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:39.642541  304582 retry.go:31] will retry after 3.073679854s: waiting for machine to come up
	I0920 19:05:38.116192  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:40.605958  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:39.236010  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:39.735723  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:40.236003  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:40.735741  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:41.235689  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:41.736411  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:42.236028  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:42.735814  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:43.236391  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:43.736174  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
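The run of pgrep calls above is minikube polling roughly every 500ms for a kube-apiserver process after the control-plane and etcd phases. The same wait as a small shell loop, using the exact pattern from the log:

  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
    sleep 0.5    # keep polling until the apiserver process appears
  done
  sudo pgrep -xnf 'kube-apiserver.*minikube.*'       # prints the PID once it is up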
	I0920 19:05:41.783707  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:43.784197  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:42.719195  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.719748  302538 main.go:141] libmachine: (no-preload-037711) Found IP for machine: 192.168.61.136
	I0920 19:05:42.719775  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has current primary IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.719780  302538 main.go:141] libmachine: (no-preload-037711) Reserving static IP address...
	I0920 19:05:42.720201  302538 main.go:141] libmachine: (no-preload-037711) Reserved static IP address: 192.168.61.136
	I0920 19:05:42.720220  302538 main.go:141] libmachine: (no-preload-037711) Waiting for SSH to be available...
	I0920 19:05:42.720239  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "no-preload-037711", mac: "52:54:00:b0:ac:14", ip: "192.168.61.136"} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.720268  302538 main.go:141] libmachine: (no-preload-037711) DBG | skip adding static IP to network mk-no-preload-037711 - found existing host DHCP lease matching {name: "no-preload-037711", mac: "52:54:00:b0:ac:14", ip: "192.168.61.136"}
	I0920 19:05:42.720280  302538 main.go:141] libmachine: (no-preload-037711) DBG | Getting to WaitForSSH function...
	I0920 19:05:42.722402  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.722661  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.722686  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.722864  302538 main.go:141] libmachine: (no-preload-037711) DBG | Using SSH client type: external
	I0920 19:05:42.722877  302538 main.go:141] libmachine: (no-preload-037711) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa (-rw-------)
	I0920 19:05:42.722939  302538 main.go:141] libmachine: (no-preload-037711) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:05:42.722962  302538 main.go:141] libmachine: (no-preload-037711) DBG | About to run SSH command:
	I0920 19:05:42.722979  302538 main.go:141] libmachine: (no-preload-037711) DBG | exit 0
	I0920 19:05:42.850057  302538 main.go:141] libmachine: (no-preload-037711) DBG | SSH cmd err, output: <nil>: 
	I0920 19:05:42.850451  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetConfigRaw
	I0920 19:05:42.851176  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:42.853807  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.854268  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.854290  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.854558  302538 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/config.json ...
	I0920 19:05:42.854764  302538 machine.go:93] provisionDockerMachine start ...
	I0920 19:05:42.854782  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:42.854999  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:42.857347  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.857683  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.857712  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.857892  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:42.858073  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.858242  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.858385  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:42.858569  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:42.858755  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:42.858766  302538 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:05:42.962098  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:05:42.962137  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:05:42.962455  302538 buildroot.go:166] provisioning hostname "no-preload-037711"
	I0920 19:05:42.962488  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:05:42.962696  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:42.965410  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.965793  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.965822  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.965954  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:42.966128  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.966285  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.966442  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:42.966650  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:42.966822  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:42.966847  302538 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-037711 && echo "no-preload-037711" | sudo tee /etc/hostname
	I0920 19:05:43.089291  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-037711
	
	I0920 19:05:43.089338  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.092213  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.092658  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.092689  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.092828  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.093031  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.093188  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.093305  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.093478  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:43.093692  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:43.093719  302538 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-037711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-037711/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-037711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:05:43.210625  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:05:43.210660  302538 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:05:43.210720  302538 buildroot.go:174] setting up certificates
	I0920 19:05:43.210740  302538 provision.go:84] configureAuth start
	I0920 19:05:43.210758  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:05:43.211093  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:43.213829  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.214346  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.214379  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.214542  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.216979  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.217294  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.217319  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.217461  302538 provision.go:143] copyHostCerts
	I0920 19:05:43.217526  302538 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:05:43.217546  302538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:05:43.217610  302538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:05:43.217708  302538 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:05:43.217720  302538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:05:43.217750  302538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:05:43.217885  302538 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:05:43.217899  302538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:05:43.217947  302538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:05:43.218008  302538 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.no-preload-037711 san=[127.0.0.1 192.168.61.136 localhost minikube no-preload-037711]
	I0920 19:05:43.395507  302538 provision.go:177] copyRemoteCerts
	I0920 19:05:43.395576  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:05:43.395607  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.398288  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.398663  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.398694  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.398899  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.399087  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.399205  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.399324  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:43.488543  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 19:05:43.514793  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:05:43.537520  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:05:43.561983  302538 provision.go:87] duration metric: took 351.22541ms to configureAuth
	I0920 19:05:43.562021  302538 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:05:43.562213  302538 config.go:182] Loaded profile config "no-preload-037711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:05:43.562292  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.565776  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.566235  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.566270  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.566486  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.566706  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.566895  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.567043  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.567251  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:43.567439  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:43.567454  302538 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:05:43.797110  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:05:43.797142  302538 machine.go:96] duration metric: took 942.364782ms to provisionDockerMachine
	I0920 19:05:43.797157  302538 start.go:293] postStartSetup for "no-preload-037711" (driver="kvm2")
	I0920 19:05:43.797171  302538 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:05:43.797193  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:43.797516  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:05:43.797546  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.800148  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.800532  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.800559  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.800794  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.800993  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.801158  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.801255  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:43.885788  302538 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:05:43.890070  302538 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:05:43.890108  302538 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:05:43.890198  302538 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:05:43.890293  302538 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:05:43.890405  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:05:43.899679  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:43.924928  302538 start.go:296] duration metric: took 127.752462ms for postStartSetup
	I0920 19:05:43.924973  302538 fix.go:56] duration metric: took 20.133755115s for fixHost
	I0920 19:05:43.924996  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.927678  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.928059  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.928099  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.928277  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.928517  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.928685  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.928815  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.928979  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:43.929190  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:43.929204  302538 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:05:44.042745  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859144.016675004
	
	I0920 19:05:44.042769  302538 fix.go:216] guest clock: 1726859144.016675004
	I0920 19:05:44.042776  302538 fix.go:229] Guest: 2024-09-20 19:05:44.016675004 +0000 UTC Remote: 2024-09-20 19:05:43.924977449 +0000 UTC m=+357.534412233 (delta=91.697555ms)
	I0920 19:05:44.042804  302538 fix.go:200] guest clock delta is within tolerance: 91.697555ms
	I0920 19:05:44.042819  302538 start.go:83] releasing machines lock for "no-preload-037711", held for 20.251627041s
	I0920 19:05:44.042842  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.043134  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:44.046071  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.046412  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:44.046440  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.046613  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.047113  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.047278  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.047366  302538 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:05:44.047428  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:44.047520  302538 ssh_runner.go:195] Run: cat /version.json
	I0920 19:05:44.047548  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:44.050275  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.050358  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.050849  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:44.050872  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:44.050892  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.050915  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.051095  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:44.051259  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:44.051259  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:44.051496  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:44.051637  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:44.051655  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:44.051789  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:44.051953  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:44.134420  302538 ssh_runner.go:195] Run: systemctl --version
	I0920 19:05:44.175303  302538 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:05:44.319129  302538 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:05:44.325894  302538 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:05:44.325975  302538 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:05:44.341779  302538 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:05:44.341809  302538 start.go:495] detecting cgroup driver to use...
	I0920 19:05:44.341899  302538 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:05:44.358211  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:05:44.373240  302538 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:05:44.373327  302538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:05:44.387429  302538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:05:44.401684  302538 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:05:44.521292  302538 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:05:44.668050  302538 docker.go:233] disabling docker service ...
	I0920 19:05:44.668124  302538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:05:44.683196  302538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:05:44.696604  302538 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:05:44.843581  302538 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:05:44.959377  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:05:44.973472  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:05:44.991282  302538 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:05:44.991344  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.001696  302538 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:05:45.001776  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.012684  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.023288  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.034330  302538 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:05:45.045773  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.056332  302538 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.074730  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.085656  302538 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:05:45.096371  302538 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:05:45.096447  302538 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:05:45.112094  302538 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:05:45.123050  302538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:45.236136  302538 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:05:45.325978  302538 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:05:45.326065  302538 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:05:45.330452  302538 start.go:563] Will wait 60s for crictl version
	I0920 19:05:45.330527  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.334010  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:05:45.373622  302538 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:05:45.373736  302538 ssh_runner.go:195] Run: crio --version
	I0920 19:05:45.401279  302538 ssh_runner.go:195] Run: crio --version
	I0920 19:05:45.430445  302538 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 19:05:45.431717  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:45.434768  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:45.435094  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:45.435121  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:45.435335  302538 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 19:05:45.439275  302538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:45.451300  302538 kubeadm.go:883] updating cluster {Name:no-preload-037711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-037711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:05:45.451461  302538 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:05:45.451502  302538 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:45.485045  302538 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 19:05:45.485073  302538 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 19:05:45.485130  302538 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:45.485150  302538 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.485168  302538 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.485182  302538 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.485231  302538 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.485171  302538 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.485305  302538 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 19:05:45.485450  302538 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.486694  302538 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.486700  302538 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.486808  302538 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.486808  302538 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 19:05:45.486829  302538 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.486894  302538 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:45.486829  302538 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.487055  302538 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.708911  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0920 19:05:45.773014  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.815176  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.818274  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.818298  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.829644  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.850791  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.862553  302538 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0920 19:05:45.862616  302538 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.862680  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.907516  302538 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0920 19:05:45.907573  302538 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.907629  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.938640  302538 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0920 19:05:45.938715  302538 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.938755  302538 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0920 19:05:45.938799  302538 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.938845  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.938770  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.947658  302538 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0920 19:05:45.947706  302538 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.947757  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.965105  302538 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0920 19:05:45.965161  302538 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.965166  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.965191  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.965248  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.965282  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.965344  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.965350  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:46.044513  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:46.044640  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:46.077894  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:46.080113  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:46.080170  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:46.080239  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:46.155137  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:46.155188  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:46.208431  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:46.208477  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:46.208521  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:46.208565  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:46.290657  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:46.290694  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 19:05:46.290794  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 19:05:46.325206  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 19:05:46.325353  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 19:05:46.353181  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0920 19:05:46.353289  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 19:05:46.353307  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 19:05:46.353312  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0920 19:05:46.353331  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0920 19:05:46.353383  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0920 19:05:46.353418  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 19:05:46.353384  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 19:05:46.353512  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 19:05:46.379873  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0920 19:05:46.379934  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0920 19:05:46.379979  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 19:05:46.380024  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0920 19:05:46.379981  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0920 19:05:46.380134  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 19:05:43.105005  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:45.105781  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:47.604822  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:44.235886  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:44.736349  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:45.235783  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:45.736619  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:46.236082  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:46.736609  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:47.236078  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:47.736130  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:48.236218  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:48.735858  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:45.784555  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:47.785125  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:46.622278  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:48.339532  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.985991382s)
	I0920 19:05:48.339568  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0920 19:05:48.339594  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 19:05:48.339653  302538 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.959488823s)
	I0920 19:05:48.339685  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0920 19:05:48.339665  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 19:05:48.339742  302538 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.717432253s)
	I0920 19:05:48.339787  302538 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0920 19:05:48.339815  302538 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:48.339842  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:48.343725  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:50.823508  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.483779728s)
	I0920 19:05:50.823559  302538 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.479795238s)
	I0920 19:05:50.823593  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0920 19:05:50.823637  302538 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0920 19:05:50.823649  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:50.823692  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0920 19:05:49.607326  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:51.609055  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:49.236645  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:49.736183  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:50.236642  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:50.735713  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:51.235862  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:51.736479  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:52.235726  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:52.735939  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:53.235759  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:53.736290  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:50.284090  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:52.284996  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:54.127303  302538 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.303601736s)
	I0920 19:05:54.127415  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:54.127327  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.303608969s)
	I0920 19:05:54.127455  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0920 19:05:54.127488  302538 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0920 19:05:54.127530  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0920 19:05:56.202021  302538 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.074563861s)
	I0920 19:05:56.202050  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.074501802s)
	I0920 19:05:56.202076  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0920 19:05:56.202095  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 19:05:56.202118  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 19:05:56.202184  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 19:05:56.202202  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 19:05:56.207141  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0920 19:05:54.104909  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:56.105373  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:54.235840  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:54.735817  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:55.235812  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:55.736410  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:56.236203  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:56.735713  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:57.235777  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:57.735835  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:58.236448  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:58.736010  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:54.783661  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:56.784770  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:58.785122  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:58.166303  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.964088667s)
	I0920 19:05:58.166340  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0920 19:05:58.166369  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 19:05:58.166424  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 19:05:59.625258  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.458808535s)
	I0920 19:05:59.625294  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0920 19:05:59.625318  302538 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 19:05:59.625361  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0920 19:06:00.572722  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 19:06:00.572768  302538 cache_images.go:123] Successfully loaded all cached images
	I0920 19:06:00.572774  302538 cache_images.go:92] duration metric: took 15.087689513s to LoadCachedImages
	I0920 19:06:00.572788  302538 kubeadm.go:934] updating node { 192.168.61.136 8443 v1.31.1 crio true true} ...
	I0920 19:06:00.572917  302538 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-037711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-037711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:06:00.572994  302538 ssh_runner.go:195] Run: crio config
	I0920 19:06:00.619832  302538 cni.go:84] Creating CNI manager for ""
	I0920 19:06:00.619861  302538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:06:00.619875  302538 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:06:00.619910  302538 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.136 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-037711 NodeName:no-preload-037711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:06:00.620110  302538 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-037711"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
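The block above is the multi-document kubeadm config that minikube renders and copies to /var/tmp/minikube/kubeadm.yaml.new: an InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one YAML stream. As a minimal, illustrative sketch of how such a stream can be enumerated (not part of minikube; it assumes the gopkg.in/yaml.v3 module is available):

// Illustrative sketch: list the kind/apiVersion of each document in the
// multi-document kubeadm.yaml shown above. Assumes gopkg.in/yaml.v3.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break // end of the YAML stream
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}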
	I0920 19:06:00.620181  302538 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:06:00.630434  302538 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:06:00.630513  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:06:00.639447  302538 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0920 19:06:00.656195  302538 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:06:00.675718  302538 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0920 19:06:00.709191  302538 ssh_runner.go:195] Run: grep 192.168.61.136	control-plane.minikube.internal$ /etc/hosts
	I0920 19:06:00.713271  302538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:06:00.726826  302538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:06:00.850927  302538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:06:00.869014  302538 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711 for IP: 192.168.61.136
	I0920 19:06:00.869044  302538 certs.go:194] generating shared ca certs ...
	I0920 19:06:00.869109  302538 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:06:00.869331  302538 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:06:00.869393  302538 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:06:00.869405  302538 certs.go:256] generating profile certs ...
	I0920 19:06:00.869507  302538 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/client.key
	I0920 19:06:00.869589  302538 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/apiserver.key.b5da98fb
	I0920 19:06:00.869654  302538 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/proxy-client.key
	I0920 19:06:00.869831  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:06:00.869877  302538 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:06:00.869890  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:06:00.869947  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:06:00.869981  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:06:00.870010  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:06:00.870068  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:06:00.870802  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:06:00.922699  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:06:00.953401  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:06:00.996889  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:06:01.024682  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 19:06:01.050412  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 19:06:01.081212  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:06:01.108337  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:06:01.133628  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:06:01.158805  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:06:01.186888  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:06:01.211771  302538 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:06:01.229448  302538 ssh_runner.go:195] Run: openssl version
	I0920 19:06:01.235289  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:06:01.246775  302538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:06:01.251410  302538 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:06:01.251472  302538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:06:01.257271  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:06:01.268229  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:06:01.280431  302538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:06:01.285643  302538 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:06:01.285736  302538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:06:01.291772  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:06:01.302858  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:06:01.314034  302538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:06:01.319160  302538 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:06:01.319235  302538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:06:01.325450  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:06:01.336803  302538 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:06:01.341439  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:06:01.347592  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:06:01.354109  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:06:01.360513  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:06:01.366749  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:06:01.372898  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
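The `openssl x509 ... -checkend 86400` runs above ask whether each control-plane certificate will expire within the next 24 hours. A rough equivalent using only the Go standard library might look like the sketch below (illustrative only, not minikube's implementation; the path is one of the certs named in the log):

// Sketch of a 24-hour certificate-expiry check, mirroring what
// `openssl x509 -checkend 86400` asks in the log above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(certPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// true if the certificate's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", expiring)
}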
	I0920 19:06:01.379101  302538 kubeadm.go:392] StartCluster: {Name:no-preload-037711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-037711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:06:01.379228  302538 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:06:01.379280  302538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:06:01.416896  302538 cri.go:89] found id: ""
	I0920 19:06:01.416972  302538 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:05:58.606203  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:00.606802  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:59.236283  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:59.736440  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:00.236142  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:00.735772  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:01.236360  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:01.736388  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:02.236462  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:02.736742  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.236457  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.736705  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:01.284596  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:03.784495  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:01.428611  302538 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:06:01.428636  302538 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:06:01.428685  302538 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:06:01.439392  302538 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:06:01.440512  302538 kubeconfig.go:125] found "no-preload-037711" server: "https://192.168.61.136:8443"
	I0920 19:06:01.442938  302538 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:06:01.452938  302538 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.136
	I0920 19:06:01.452982  302538 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:06:01.452999  302538 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:06:01.453062  302538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:06:01.487878  302538 cri.go:89] found id: ""
	I0920 19:06:01.487967  302538 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:06:01.506032  302538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:06:01.516536  302538 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:06:01.516562  302538 kubeadm.go:157] found existing configuration files:
	
	I0920 19:06:01.516609  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:06:01.526718  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:06:01.526790  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:06:01.536809  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:06:01.546172  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:06:01.546243  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:06:01.556211  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:06:01.565796  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:06:01.565869  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:06:01.577089  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:06:01.587862  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:06:01.587985  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:06:01.598666  302538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:06:01.610018  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:01.740046  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.566817  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.784258  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.848752  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.933469  302538 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:06:02.933579  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.434385  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.933975  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.962422  302538 api_server.go:72] duration metric: took 1.028951755s to wait for apiserver process to appear ...
	I0920 19:06:03.962453  302538 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:06:03.962485  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:03.963119  302538 api_server.go:269] stopped: https://192.168.61.136:8443/healthz: Get "https://192.168.61.136:8443/healthz": dial tcp 192.168.61.136:8443: connect: connection refused
	I0920 19:06:04.462843  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.443140  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:06:06.443178  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:06:06.443196  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.485554  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:06:06.485597  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:06:06.485614  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.566023  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:06:06.566068  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:06:06.963116  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.972764  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:06:06.972804  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:06:07.463432  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:07.470963  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:06:07.471000  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:06:07.962553  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:07.967724  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 200:
	ok
	I0920 19:06:07.975215  302538 api_server.go:141] control plane version: v1.31.1
	I0920 19:06:07.975248  302538 api_server.go:131] duration metric: took 4.01278814s to wait for apiserver health ...
	I0920 19:06:07.975258  302538 cni.go:84] Creating CNI manager for ""
	I0920 19:06:07.975267  302538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:06:07.977455  302538 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
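The api_server.go lines above show the health-wait pattern: /healthz is polled roughly every 500ms, 403 (anonymous user) and 500 (post-start hooks still settling) responses count as "not ready yet", and the wait ends once a 200 "ok" comes back. Below is a minimal stdlib-only sketch of that pattern, assuming anonymous access to the endpoint and using InsecureSkipVerify in place of minikube's CA handling; it is not minikube's actual implementation.

// Sketch of polling an HTTPS /healthz endpoint until it returns 200,
// tolerating 403/500 while the apiserver's post-start hooks settle.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "healthz returned 200: ok"
			}
			// 403 and 500 mean "keep waiting", as in the log above
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.136:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}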
	I0920 19:06:03.106079  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:05.609475  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:04.236005  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:04.735854  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:05.236716  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:05.736668  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:06.235839  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:06.736412  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:07.236224  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:07.735830  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:08.235800  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:08.736645  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:06.284930  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:08.784854  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:07.979099  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:06:07.991210  302538 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:06:08.016110  302538 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:06:08.031124  302538 system_pods.go:59] 8 kube-system pods found
	I0920 19:06:08.031177  302538 system_pods.go:61] "coredns-7c65d6cfc9-8gmsq" [91d89ad2-f899-464c-b351-a0773c16223b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:06:08.031191  302538 system_pods.go:61] "etcd-no-preload-037711" [5b353ad3-0389-4e3d-b5c3-2f2bc65db200] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 19:06:08.031203  302538 system_pods.go:61] "kube-apiserver-no-preload-037711" [b19002c7-f891-4bc1-a2f0-0f6beebb3987] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 19:06:08.031247  302538 system_pods.go:61] "kube-controller-manager-no-preload-037711" [a5b1951d-7189-4ee3-bc28-bed058048ebb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:06:08.031262  302538 system_pods.go:61] "kube-proxy-zzmkv" [c8f4695b-eefd-407a-9b7c-d5078632d120] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 19:06:08.031270  302538 system_pods.go:61] "kube-scheduler-no-preload-037711" [b44824ba-52ad-4e86-9408-118f0e1852d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 19:06:08.031280  302538 system_pods.go:61] "metrics-server-6867b74b74-7xpgm" [f6280d56-5be4-475f-91da-2862e992868f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:06:08.031290  302538 system_pods.go:61] "storage-provisioner" [d1efb64f-d2a9-4bb4-9bc3-c643c415fcf2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 19:06:08.031300  302538 system_pods.go:74] duration metric: took 15.160935ms to wait for pod list to return data ...
	I0920 19:06:08.031310  302538 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:06:08.035903  302538 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:06:08.035953  302538 node_conditions.go:123] node cpu capacity is 2
	I0920 19:06:08.035968  302538 node_conditions.go:105] duration metric: took 4.652846ms to run NodePressure ...
	I0920 19:06:08.035995  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:08.404721  302538 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 19:06:08.409400  302538 kubeadm.go:739] kubelet initialised
	I0920 19:06:08.409423  302538 kubeadm.go:740] duration metric: took 4.670172ms waiting for restarted kubelet to initialise ...
	I0920 19:06:08.409432  302538 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:06:08.416547  302538 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:10.426817  302538 pod_ready.go:103] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:08.107050  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:10.606744  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:09.236127  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:09.735809  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:10.236585  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:10.735863  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:11.236700  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:11.736557  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:12.236483  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:12.735695  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:13.235905  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:13.736128  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:10.785471  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:13.284642  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:12.923811  302538 pod_ready.go:103] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:15.423162  302538 pod_ready.go:103] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:15.926280  302538 pod_ready.go:93] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:15.926318  302538 pod_ready.go:82] duration metric: took 7.509740963s for pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:15.926332  302538 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:15.932683  302538 pod_ready.go:93] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:15.932713  302538 pod_ready.go:82] duration metric: took 6.372388ms for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:15.932725  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:13.111190  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:15.606371  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:14.236234  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:14.736677  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:15.236499  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:15.735667  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:16.235774  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:16.735833  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:17.236149  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:17.735782  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:18.236400  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:18.736460  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:15.784441  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:18.284748  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:17.938853  302538 pod_ready.go:103] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:19.939569  302538 pod_ready.go:103] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:18.104867  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:20.105870  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:22.605773  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:19.236298  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:19.736672  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:20.236401  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:20.735810  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:21.235673  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:21.736112  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:22.235998  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:22.736179  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:23.236680  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:23.736388  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:20.783320  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:22.783590  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:21.939753  302538 pod_ready.go:93] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:21.939781  302538 pod_ready.go:82] duration metric: took 6.007035191s for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:21.939794  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.446396  302538 pod_ready.go:93] pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:22.446425  302538 pod_ready.go:82] duration metric: took 506.622064ms for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.446435  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zzmkv" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.452105  302538 pod_ready.go:93] pod "kube-proxy-zzmkv" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:22.452130  302538 pod_ready.go:82] duration metric: took 5.688419ms for pod "kube-proxy-zzmkv" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.452139  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.456181  302538 pod_ready.go:93] pod "kube-scheduler-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:22.456205  302538 pod_ready.go:82] duration metric: took 4.05917ms for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.456215  302538 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:24.463262  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
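The pod_ready.go lines interleaved through this log wait for each system pod's Ready condition to report True. A rough sketch of that check using client-go follows (a hypothetical helper, not minikube's implementation; it assumes k8s.io/client-go is available and that kubeconfigPath is filled in with the profile's kubeconfig):

// Sketch: report whether a pod's Ready condition is True.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	kubeconfigPath := "/path/to/kubeconfig" // placeholder; minikube keeps one per profile
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ready, err := podReady(clientset, "kube-system", "etcd-no-preload-037711")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Ready:", ready)
}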
	I0920 19:06:24.606021  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:27.105497  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:24.236369  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:24.736082  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:25.236694  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:25.736346  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:26.236075  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:26.736666  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:27.236418  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:27.736656  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:28.235972  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:28.735743  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:24.783673  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:26.783960  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.283970  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:26.962413  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.462423  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.606628  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:32.105603  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.236688  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:29.736132  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:30.236404  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:30.735733  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:31.236364  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:31.736031  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:32.236457  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:32.735751  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:33.236371  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:33.736474  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:31.284572  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:33.286630  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:31.464686  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:33.962309  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:35.963445  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:34.105897  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:36.605140  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:34.236387  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:34.236472  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:34.276702  303486 cri.go:89] found id: ""
	I0920 19:06:34.276735  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.276747  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:34.276758  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:34.276815  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:34.312886  303486 cri.go:89] found id: ""
	I0920 19:06:34.312923  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.312935  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:34.312950  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:34.313024  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:34.347199  303486 cri.go:89] found id: ""
	I0920 19:06:34.347240  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.347250  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:34.347258  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:34.347332  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:34.383077  303486 cri.go:89] found id: ""
	I0920 19:06:34.383110  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.383121  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:34.383130  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:34.383202  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:34.421184  303486 cri.go:89] found id: ""
	I0920 19:06:34.421212  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.421222  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:34.421231  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:34.421304  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:34.459964  303486 cri.go:89] found id: ""
	I0920 19:06:34.459998  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.460009  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:34.460018  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:34.460085  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:34.493761  303486 cri.go:89] found id: ""
	I0920 19:06:34.493803  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.493815  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:34.493824  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:34.493894  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:34.534406  303486 cri.go:89] found id: ""
	I0920 19:06:34.534445  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.534457  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:34.534471  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:34.534496  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:34.607256  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:34.607297  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:34.644923  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:34.644953  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:34.693574  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:34.693622  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:34.707703  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:34.707742  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:34.846809  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:37.347895  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:37.377651  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:37.377728  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:37.430034  303486 cri.go:89] found id: ""
	I0920 19:06:37.430071  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.430079  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:37.430087  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:37.430156  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:37.467026  303486 cri.go:89] found id: ""
	I0920 19:06:37.467055  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.467063  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:37.467069  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:37.467120  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:37.505791  303486 cri.go:89] found id: ""
	I0920 19:06:37.505824  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.505835  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:37.505845  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:37.505943  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:37.541519  303486 cri.go:89] found id: ""
	I0920 19:06:37.541556  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.541568  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:37.541577  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:37.541633  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:37.576088  303486 cri.go:89] found id: ""
	I0920 19:06:37.576126  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.576137  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:37.576146  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:37.576204  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:37.613039  303486 cri.go:89] found id: ""
	I0920 19:06:37.613074  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.613084  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:37.613091  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:37.613153  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:37.656440  303486 cri.go:89] found id: ""
	I0920 19:06:37.656473  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.656482  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:37.656489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:37.656555  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:37.693247  303486 cri.go:89] found id: ""
	I0920 19:06:37.693283  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.693292  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:37.693302  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:37.693321  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:37.769230  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:37.769280  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:37.811016  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:37.811058  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:37.865729  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:37.865773  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:37.880056  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:37.880094  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:37.956402  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:35.783789  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:37.787063  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:38.461824  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:40.465028  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:38.605494  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:40.605606  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:40.457303  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:40.473769  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:40.473848  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:40.511320  303486 cri.go:89] found id: ""
	I0920 19:06:40.511354  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.511363  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:40.511371  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:40.511433  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:40.547086  303486 cri.go:89] found id: ""
	I0920 19:06:40.547127  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.547138  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:40.547147  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:40.547216  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:40.580969  303486 cri.go:89] found id: ""
	I0920 19:06:40.581010  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.581022  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:40.581035  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:40.581098  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:40.615802  303486 cri.go:89] found id: ""
	I0920 19:06:40.615842  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.615851  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:40.615858  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:40.615931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:40.649398  303486 cri.go:89] found id: ""
	I0920 19:06:40.649444  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.649459  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:40.649467  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:40.649541  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:40.683124  303486 cri.go:89] found id: ""
	I0920 19:06:40.683160  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.683172  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:40.683181  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:40.683251  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:40.718005  303486 cri.go:89] found id: ""
	I0920 19:06:40.718032  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.718040  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:40.718047  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:40.718107  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:40.751965  303486 cri.go:89] found id: ""
	I0920 19:06:40.751992  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.752000  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:40.752010  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:40.752024  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:40.765195  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:40.765234  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:40.842287  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:40.842321  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:40.842338  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:40.928384  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:40.928430  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:40.970207  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:40.970242  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:43.526435  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:43.540582  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:43.540680  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:43.576798  303486 cri.go:89] found id: ""
	I0920 19:06:43.576837  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.576846  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:43.576852  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:43.576916  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:43.615261  303486 cri.go:89] found id: ""
	I0920 19:06:43.615290  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.615298  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:43.615305  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:43.615359  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:43.651214  303486 cri.go:89] found id: ""
	I0920 19:06:43.651251  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.651264  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:43.651277  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:43.651338  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:43.684483  303486 cri.go:89] found id: ""
	I0920 19:06:43.684523  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.684535  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:43.684544  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:43.684614  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:43.720996  303486 cri.go:89] found id: ""
	I0920 19:06:43.721026  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.721035  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:43.721041  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:43.721107  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:43.764445  303486 cri.go:89] found id: ""
	I0920 19:06:43.764482  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.764493  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:43.764501  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:43.764564  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:43.808848  303486 cri.go:89] found id: ""
	I0920 19:06:43.808878  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.808888  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:43.808897  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:43.808968  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:43.845462  303486 cri.go:89] found id: ""
	I0920 19:06:43.845491  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.845500  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:43.845511  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:43.845525  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:43.896550  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:43.896596  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:43.909243  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:43.909272  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:06:40.284735  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:42.783363  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:42.962289  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:44.963071  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:43.106353  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:45.606296  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	W0920 19:06:43.987455  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:43.987474  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:43.987491  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:44.063585  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:44.063629  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:46.602859  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:46.617286  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:46.617357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:46.653643  303486 cri.go:89] found id: ""
	I0920 19:06:46.653681  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.653693  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:46.653702  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:46.653778  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:46.691169  303486 cri.go:89] found id: ""
	I0920 19:06:46.691198  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.691206  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:46.691213  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:46.691271  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:46.725498  303486 cri.go:89] found id: ""
	I0920 19:06:46.725527  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.725538  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:46.725545  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:46.725614  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:46.758850  303486 cri.go:89] found id: ""
	I0920 19:06:46.758876  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.758884  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:46.758891  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:46.758942  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:46.793648  303486 cri.go:89] found id: ""
	I0920 19:06:46.793683  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.793692  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:46.793699  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:46.793755  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:46.832908  303486 cri.go:89] found id: ""
	I0920 19:06:46.832940  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.832947  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:46.832953  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:46.833019  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:46.866450  303486 cri.go:89] found id: ""
	I0920 19:06:46.866502  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.866513  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:46.866522  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:46.866593  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:46.901966  303486 cri.go:89] found id: ""
	I0920 19:06:46.902001  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.902013  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:46.902026  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:46.902041  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:46.948901  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:46.948946  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:46.963489  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:46.963534  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:47.041701  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:47.041722  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:47.041736  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:47.124320  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:47.124364  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:44.783818  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:46.784000  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:48.785175  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:46.963700  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:49.462018  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:48.104361  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:50.105520  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:52.605799  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:49.664255  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:49.677240  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:49.677322  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:49.712375  303486 cri.go:89] found id: ""
	I0920 19:06:49.712401  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.712409  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:49.712415  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:49.712476  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:49.747682  303486 cri.go:89] found id: ""
	I0920 19:06:49.747713  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.747721  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:49.747727  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:49.747783  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:49.782276  303486 cri.go:89] found id: ""
	I0920 19:06:49.782319  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.782329  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:49.782337  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:49.782400  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:49.822625  303486 cri.go:89] found id: ""
	I0920 19:06:49.822661  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.822672  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:49.822680  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:49.822751  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:49.862159  303486 cri.go:89] found id: ""
	I0920 19:06:49.862192  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.862202  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:49.862212  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:49.862281  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:49.897552  303486 cri.go:89] found id: ""
	I0920 19:06:49.897587  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.897595  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:49.897608  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:49.897667  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:49.931667  303486 cri.go:89] found id: ""
	I0920 19:06:49.931698  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.931709  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:49.931718  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:49.931774  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:49.969206  303486 cri.go:89] found id: ""
	I0920 19:06:49.969236  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.969244  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:49.969254  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:49.969266  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:50.019287  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:50.019328  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:50.033080  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:50.033113  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:50.106415  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:50.106442  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:50.106459  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:50.183710  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:50.183762  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:52.725443  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:52.739293  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:52.739386  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:52.772412  303486 cri.go:89] found id: ""
	I0920 19:06:52.772445  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.772454  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:52.772461  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:52.772528  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:52.811153  303486 cri.go:89] found id: ""
	I0920 19:06:52.811189  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.811197  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:52.811204  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:52.811260  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:52.848709  303486 cri.go:89] found id: ""
	I0920 19:06:52.848740  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.848749  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:52.848755  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:52.848811  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:52.883358  303486 cri.go:89] found id: ""
	I0920 19:06:52.883387  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.883394  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:52.883400  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:52.883455  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:52.917838  303486 cri.go:89] found id: ""
	I0920 19:06:52.917874  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.917893  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:52.917912  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:52.917982  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:52.952340  303486 cri.go:89] found id: ""
	I0920 19:06:52.952378  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.952387  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:52.952396  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:52.952471  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:52.986433  303486 cri.go:89] found id: ""
	I0920 19:06:52.986469  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.986478  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:52.986486  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:52.986582  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:53.024209  303486 cri.go:89] found id: ""
	I0920 19:06:53.024241  303486 logs.go:276] 0 containers: []
	W0920 19:06:53.024249  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:53.024260  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:53.024272  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:53.075336  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:53.075374  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:53.090761  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:53.090802  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:53.167883  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:53.167915  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:53.167933  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:53.242003  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:53.242044  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:50.785624  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:53.284212  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:51.462197  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:53.962545  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:55.962875  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:54.607806  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:57.105146  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:55.779107  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:55.793713  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:55.793802  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:55.829411  303486 cri.go:89] found id: ""
	I0920 19:06:55.829441  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.829450  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:55.829456  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:55.829513  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:55.864578  303486 cri.go:89] found id: ""
	I0920 19:06:55.864606  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.864617  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:55.864625  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:55.864686  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:55.897004  303486 cri.go:89] found id: ""
	I0920 19:06:55.897033  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.897041  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:55.897048  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:55.897106  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:55.931019  303486 cri.go:89] found id: ""
	I0920 19:06:55.931055  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.931066  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:55.931076  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:55.931141  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:55.966595  303486 cri.go:89] found id: ""
	I0920 19:06:55.966625  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.966635  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:55.966643  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:55.966693  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:55.999707  303486 cri.go:89] found id: ""
	I0920 19:06:55.999736  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.999747  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:55.999756  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:55.999825  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:56.034323  303486 cri.go:89] found id: ""
	I0920 19:06:56.034361  303486 logs.go:276] 0 containers: []
	W0920 19:06:56.034371  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:56.034377  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:56.034433  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:56.069019  303486 cri.go:89] found id: ""
	I0920 19:06:56.069048  303486 logs.go:276] 0 containers: []
	W0920 19:06:56.069056  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:56.069066  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:56.069077  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:56.122820  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:56.122860  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:56.136924  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:56.136966  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:56.216255  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:56.216284  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:56.216299  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:56.293461  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:56.293506  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:58.831252  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:58.844410  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:58.844474  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:58.877508  303486 cri.go:89] found id: ""
	I0920 19:06:58.877539  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.877547  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:58.877555  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:58.877613  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:58.911284  303486 cri.go:89] found id: ""
	I0920 19:06:58.911315  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.911323  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:58.911329  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:58.911382  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:58.944646  303486 cri.go:89] found id: ""
	I0920 19:06:58.944675  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.944682  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:58.944688  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:58.944739  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:55.784379  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:58.283450  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:58.461839  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:00.461977  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:59.108066  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:01.605247  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:58.979752  303486 cri.go:89] found id: ""
	I0920 19:06:58.979787  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.979798  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:58.979807  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:58.979864  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:59.016613  303486 cri.go:89] found id: ""
	I0920 19:06:59.016649  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.016661  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:59.016670  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:59.016735  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:59.052012  303486 cri.go:89] found id: ""
	I0920 19:06:59.052039  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.052047  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:59.052054  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:59.052106  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:59.090102  303486 cri.go:89] found id: ""
	I0920 19:06:59.090140  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.090152  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:59.090159  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:59.090213  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:59.128028  303486 cri.go:89] found id: ""
	I0920 19:06:59.128057  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.128068  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:59.128080  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:59.128096  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:59.142966  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:59.143012  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:59.227311  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:59.227336  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:59.227357  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:59.308319  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:59.308366  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:59.347299  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:59.347336  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:01.897644  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:01.912876  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:01.912951  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:01.956550  303486 cri.go:89] found id: ""
	I0920 19:07:01.956679  303486 logs.go:276] 0 containers: []
	W0920 19:07:01.956690  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:01.956700  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:01.956765  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:01.995391  303486 cri.go:89] found id: ""
	I0920 19:07:01.995425  303486 logs.go:276] 0 containers: []
	W0920 19:07:01.995433  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:01.995440  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:01.995501  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:02.031149  303486 cri.go:89] found id: ""
	I0920 19:07:02.031181  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.031193  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:02.031202  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:02.031273  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:02.065856  303486 cri.go:89] found id: ""
	I0920 19:07:02.065885  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.065894  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:02.065924  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:02.065981  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:02.101974  303486 cri.go:89] found id: ""
	I0920 19:07:02.102018  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.102032  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:02.102041  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:02.102115  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:02.138108  303486 cri.go:89] found id: ""
	I0920 19:07:02.138142  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.138151  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:02.138156  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:02.138217  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:02.170136  303486 cri.go:89] found id: ""
	I0920 19:07:02.170165  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.170173  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:02.170179  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:02.170244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:02.203944  303486 cri.go:89] found id: ""
	I0920 19:07:02.203969  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.203978  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:02.203991  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:02.204008  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:02.256635  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:02.256679  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:02.270266  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:02.270303  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:02.341145  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:02.341182  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:02.341199  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:02.415133  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:02.415175  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:00.283726  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:02.285304  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:02.462310  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:04.462872  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:03.605300  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:06.105872  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:04.952448  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:04.966632  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:04.966702  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:05.001098  303486 cri.go:89] found id: ""
	I0920 19:07:05.001131  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.001141  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:05.001149  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:05.001217  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:05.038160  303486 cri.go:89] found id: ""
	I0920 19:07:05.038186  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.038196  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:05.038202  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:05.038260  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:05.083301  303486 cri.go:89] found id: ""
	I0920 19:07:05.083346  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.083357  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:05.083365  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:05.083436  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:05.118916  303486 cri.go:89] found id: ""
	I0920 19:07:05.118952  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.118964  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:05.118972  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:05.119065  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:05.157452  303486 cri.go:89] found id: ""
	I0920 19:07:05.157485  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.157496  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:05.157511  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:05.157587  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:05.197100  303486 cri.go:89] found id: ""
	I0920 19:07:05.197133  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.197143  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:05.197152  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:05.197225  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:05.231286  303486 cri.go:89] found id: ""
	I0920 19:07:05.231317  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.231328  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:05.231336  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:05.231409  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:05.269798  303486 cri.go:89] found id: ""
	I0920 19:07:05.269835  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.269847  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:05.269862  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:05.269882  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:05.310029  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:05.310068  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:05.360493  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:05.360537  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:05.373771  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:05.373815  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:05.449860  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:05.449886  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:05.449924  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:08.034520  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:08.049970  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:08.050040  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:08.084683  303486 cri.go:89] found id: ""
	I0920 19:07:08.084714  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.084724  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:08.084731  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:08.084799  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:08.121150  303486 cri.go:89] found id: ""
	I0920 19:07:08.121176  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.121183  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:08.121190  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:08.121244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:08.157830  303486 cri.go:89] found id: ""
	I0920 19:07:08.157865  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.157877  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:08.157891  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:08.157967  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:08.191040  303486 cri.go:89] found id: ""
	I0920 19:07:08.191082  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.191094  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:08.191102  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:08.191169  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:08.230194  303486 cri.go:89] found id: ""
	I0920 19:07:08.230230  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.230239  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:08.230246  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:08.230304  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:08.268526  303486 cri.go:89] found id: ""
	I0920 19:07:08.268558  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.268566  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:08.268573  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:08.268631  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:08.302383  303486 cri.go:89] found id: ""
	I0920 19:07:08.302411  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.302420  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:08.302428  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:08.302492  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:08.336435  303486 cri.go:89] found id: ""
	I0920 19:07:08.336469  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.336479  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:08.336491  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:08.336505  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:08.418086  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:08.418129  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:08.458355  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:08.458391  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:08.507017  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:08.507062  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:08.522701  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:08.522737  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:08.592777  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:04.784475  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:07.283612  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:09.286218  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:06.963106  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:09.462861  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:08.108458  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:10.605447  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:12.605992  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:11.093689  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:11.107438  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:11.107503  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:11.139701  303486 cri.go:89] found id: ""
	I0920 19:07:11.139742  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.139755  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:11.139765  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:11.139822  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:11.196143  303486 cri.go:89] found id: ""
	I0920 19:07:11.196182  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.196191  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:11.196197  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:11.196268  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:11.232121  303486 cri.go:89] found id: ""
	I0920 19:07:11.232156  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.232164  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:11.232171  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:11.232238  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:11.267307  303486 cri.go:89] found id: ""
	I0920 19:07:11.267338  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.267349  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:11.267358  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:11.267423  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:11.306583  303486 cri.go:89] found id: ""
	I0920 19:07:11.306614  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.306623  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:11.306631  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:11.306698  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:11.348162  303486 cri.go:89] found id: ""
	I0920 19:07:11.348188  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.348196  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:11.348203  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:11.348257  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:11.383612  303486 cri.go:89] found id: ""
	I0920 19:07:11.383649  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.383660  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:11.383669  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:11.383736  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:11.417538  303486 cri.go:89] found id: ""
	I0920 19:07:11.417575  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.417583  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:11.417593  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:11.417609  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:11.470242  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:11.470282  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:11.485448  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:11.485480  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:11.559466  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:11.559495  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:11.559513  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:11.636080  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:11.636133  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:11.783461  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:13.783785  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:11.462940  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:13.963340  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:14.609611  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:17.105222  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:14.177278  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:14.190413  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:14.190483  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:14.224238  303486 cri.go:89] found id: ""
	I0920 19:07:14.224264  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.224272  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:14.224278  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:14.224330  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:14.265253  303486 cri.go:89] found id: ""
	I0920 19:07:14.265285  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.265297  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:14.265304  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:14.265357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:14.300591  303486 cri.go:89] found id: ""
	I0920 19:07:14.300619  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.300633  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:14.300639  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:14.300695  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:14.335638  303486 cri.go:89] found id: ""
	I0920 19:07:14.335669  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.335677  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:14.335683  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:14.335735  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:14.369291  303486 cri.go:89] found id: ""
	I0920 19:07:14.369328  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.369336  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:14.369344  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:14.369397  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:14.404913  303486 cri.go:89] found id: ""
	I0920 19:07:14.404947  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.404958  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:14.404967  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:14.405034  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:14.438793  303486 cri.go:89] found id: ""
	I0920 19:07:14.438834  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.438845  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:14.438856  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:14.438926  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:14.475268  303486 cri.go:89] found id: ""
	I0920 19:07:14.475297  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.475305  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:14.475321  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:14.475342  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:14.528066  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:14.528126  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:14.542850  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:14.542891  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:14.612772  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:14.612800  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:14.612819  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:14.694528  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:14.694579  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:17.234389  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:17.247479  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:17.247544  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:17.285461  303486 cri.go:89] found id: ""
	I0920 19:07:17.285488  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.285496  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:17.285502  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:17.285553  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:17.320580  303486 cri.go:89] found id: ""
	I0920 19:07:17.320606  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.320614  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:17.320620  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:17.320677  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:17.356405  303486 cri.go:89] found id: ""
	I0920 19:07:17.356440  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.356462  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:17.356471  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:17.356526  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:17.391268  303486 cri.go:89] found id: ""
	I0920 19:07:17.391301  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.391309  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:17.391316  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:17.391381  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:17.429886  303486 cri.go:89] found id: ""
	I0920 19:07:17.429938  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.429950  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:17.429959  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:17.430022  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:17.466059  303486 cri.go:89] found id: ""
	I0920 19:07:17.466093  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.466104  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:17.466111  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:17.466176  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:17.501128  303486 cri.go:89] found id: ""
	I0920 19:07:17.501159  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.501168  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:17.501174  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:17.501247  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:17.536969  303486 cri.go:89] found id: ""
	I0920 19:07:17.536999  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.537007  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:17.537016  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:17.537031  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:17.592071  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:17.592119  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:17.609022  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:17.609057  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:17.696393  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:17.696420  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:17.696434  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:17.778077  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:17.778122  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:15.785002  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:18.284101  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:16.463809  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:18.964348  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:19.604758  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:21.608192  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:20.319211  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:20.332158  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:20.332235  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:20.366195  303486 cri.go:89] found id: ""
	I0920 19:07:20.366230  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.366241  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:20.366250  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:20.366313  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:20.401786  303486 cri.go:89] found id: ""
	I0920 19:07:20.401819  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.401829  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:20.401846  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:20.401943  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:20.433684  303486 cri.go:89] found id: ""
	I0920 19:07:20.433711  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.433719  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:20.433725  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:20.433783  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:20.469495  303486 cri.go:89] found id: ""
	I0920 19:07:20.469524  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.469535  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:20.469543  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:20.469613  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:20.502214  303486 cri.go:89] found id: ""
	I0920 19:07:20.502245  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.502256  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:20.502263  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:20.502329  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:20.535829  303486 cri.go:89] found id: ""
	I0920 19:07:20.535867  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.535879  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:20.535887  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:20.535952  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:20.569605  303486 cri.go:89] found id: ""
	I0920 19:07:20.569635  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.569643  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:20.569654  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:20.569726  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:20.603676  303486 cri.go:89] found id: ""
	I0920 19:07:20.603699  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.603706  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:20.603715  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:20.603726  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:20.656645  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:20.656692  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:20.671077  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:20.671107  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:20.740996  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:20.741028  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:20.741046  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:20.820541  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:20.820592  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:23.362973  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:23.380350  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:23.380432  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:23.423145  303486 cri.go:89] found id: ""
	I0920 19:07:23.423183  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.423193  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:23.423202  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:23.423272  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:23.459019  303486 cri.go:89] found id: ""
	I0920 19:07:23.459057  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.459068  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:23.459077  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:23.459144  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:23.502876  303486 cri.go:89] found id: ""
	I0920 19:07:23.502908  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.502920  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:23.502929  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:23.502994  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:23.538440  303486 cri.go:89] found id: ""
	I0920 19:07:23.538471  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.538481  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:23.538489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:23.538552  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:23.575164  303486 cri.go:89] found id: ""
	I0920 19:07:23.575199  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.575211  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:23.575220  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:23.575296  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:23.610449  303486 cri.go:89] found id: ""
	I0920 19:07:23.610480  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.610489  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:23.610495  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:23.610562  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:23.644164  303486 cri.go:89] found id: ""
	I0920 19:07:23.644195  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.644203  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:23.644209  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:23.644275  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:23.684379  303486 cri.go:89] found id: ""
	I0920 19:07:23.684417  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.684428  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:23.684442  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:23.684459  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:23.762838  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:23.762885  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:23.805616  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:23.805650  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:23.857080  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:23.857122  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:23.870602  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:23.870635  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:23.941187  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:20.284264  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:22.284388  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:24.285108  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:21.462493  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:23.467933  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:25.963071  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:24.106087  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:26.605442  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:26.441571  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:26.455091  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:26.455185  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:26.489658  303486 cri.go:89] found id: ""
	I0920 19:07:26.489696  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.489707  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:26.489716  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:26.489773  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:26.528829  303486 cri.go:89] found id: ""
	I0920 19:07:26.528865  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.528878  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:26.528886  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:26.528966  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:26.568402  303486 cri.go:89] found id: ""
	I0920 19:07:26.568429  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.568443  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:26.568450  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:26.568503  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:26.606654  303486 cri.go:89] found id: ""
	I0920 19:07:26.606683  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.606693  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:26.606701  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:26.606764  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:26.640825  303486 cri.go:89] found id: ""
	I0920 19:07:26.640856  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.640864  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:26.640871  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:26.640934  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:26.677023  303486 cri.go:89] found id: ""
	I0920 19:07:26.677054  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.677062  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:26.677068  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:26.677123  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:26.712921  303486 cri.go:89] found id: ""
	I0920 19:07:26.712956  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.712964  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:26.712971  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:26.713031  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:26.747750  303486 cri.go:89] found id: ""
	I0920 19:07:26.747778  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.747786  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:26.747796  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:26.747810  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:26.799240  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:26.799283  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:26.813197  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:26.813233  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:26.882751  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:26.882780  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:26.882799  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:26.965108  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:26.965146  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:26.784306  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:29.283573  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:28.461526  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:30.462242  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:28.606602  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:31.106657  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:29.503960  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:29.516601  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:29.516669  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:29.555581  303486 cri.go:89] found id: ""
	I0920 19:07:29.555622  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.555632  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:29.555640  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:29.555711  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:29.593858  303486 cri.go:89] found id: ""
	I0920 19:07:29.593885  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.593923  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:29.593937  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:29.593990  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:29.629507  303486 cri.go:89] found id: ""
	I0920 19:07:29.629538  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.629548  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:29.629557  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:29.629616  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:29.662880  303486 cri.go:89] found id: ""
	I0920 19:07:29.662913  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.662921  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:29.662928  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:29.662976  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:29.695422  303486 cri.go:89] found id: ""
	I0920 19:07:29.695448  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.695458  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:29.695466  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:29.695531  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:29.730641  303486 cri.go:89] found id: ""
	I0920 19:07:29.730673  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.730685  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:29.730693  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:29.730756  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:29.764186  303486 cri.go:89] found id: ""
	I0920 19:07:29.764220  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.764229  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:29.764238  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:29.764302  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:29.804146  303486 cri.go:89] found id: ""
	I0920 19:07:29.804174  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.804182  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:29.804191  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:29.804204  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:29.885573  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:29.885633  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:29.924619  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:29.924667  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:29.978187  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:29.978230  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:29.992161  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:29.992190  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:30.069767  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:32.570197  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:32.583160  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:32.583244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:32.620842  303486 cri.go:89] found id: ""
	I0920 19:07:32.620870  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.620881  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:32.620899  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:32.620958  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:32.657169  303486 cri.go:89] found id: ""
	I0920 19:07:32.657205  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.657216  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:32.657225  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:32.657292  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:32.694773  303486 cri.go:89] found id: ""
	I0920 19:07:32.694802  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.694809  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:32.694815  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:32.694882  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:32.733318  303486 cri.go:89] found id: ""
	I0920 19:07:32.733350  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.733360  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:32.733370  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:32.733436  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:32.766019  303486 cri.go:89] found id: ""
	I0920 19:07:32.766052  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.766062  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:32.766070  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:32.766138  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:32.801412  303486 cri.go:89] found id: ""
	I0920 19:07:32.801443  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.801454  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:32.801463  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:32.801533  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:32.833743  303486 cri.go:89] found id: ""
	I0920 19:07:32.833771  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.833779  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:32.833787  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:32.833847  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:32.866775  303486 cri.go:89] found id: ""
	I0920 19:07:32.866803  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.866811  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:32.866821  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:32.866839  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:32.919257  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:32.919310  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:32.933554  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:32.933602  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:33.002657  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:33.002702  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:33.002721  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:33.081271  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:33.081316  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:31.284488  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:33.782998  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:32.462645  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:34.963285  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:33.609072  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:36.107460  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:35.627131  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:35.640958  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:35.641032  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:35.675943  303486 cri.go:89] found id: ""
	I0920 19:07:35.675976  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.675984  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:35.675991  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:35.676044  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:35.710075  303486 cri.go:89] found id: ""
	I0920 19:07:35.710104  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.710116  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:35.710124  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:35.710194  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:35.747890  303486 cri.go:89] found id: ""
	I0920 19:07:35.747920  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.747931  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:35.747939  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:35.748004  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:35.786197  303486 cri.go:89] found id: ""
	I0920 19:07:35.786231  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.786242  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:35.786252  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:35.786314  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:35.819109  303486 cri.go:89] found id: ""
	I0920 19:07:35.819146  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.819158  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:35.819168  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:35.819244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:35.853244  303486 cri.go:89] found id: ""
	I0920 19:07:35.853282  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.853292  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:35.853301  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:35.853378  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:35.886864  303486 cri.go:89] found id: ""
	I0920 19:07:35.886897  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.886908  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:35.886917  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:35.886986  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:35.920872  303486 cri.go:89] found id: ""
	I0920 19:07:35.920906  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.920917  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:35.920939  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:35.920957  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:35.998741  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:35.998794  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:36.040681  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:36.040720  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:36.095848  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:36.095909  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:36.110903  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:36.110939  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:36.186658  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:38.687762  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:38.701640  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:38.701708  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:38.734908  303486 cri.go:89] found id: ""
	I0920 19:07:38.734946  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.734956  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:38.734966  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:38.735031  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:38.768062  303486 cri.go:89] found id: ""
	I0920 19:07:38.768100  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.768112  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:38.768120  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:38.768188  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:38.800881  303486 cri.go:89] found id: ""
	I0920 19:07:38.800915  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.800927  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:38.800936  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:38.801004  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:38.835119  303486 cri.go:89] found id: ""
	I0920 19:07:38.835148  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.835156  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:38.835164  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:38.835223  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:38.872677  303486 cri.go:89] found id: ""
	I0920 19:07:38.872712  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.872723  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:38.872733  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:38.872807  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:38.913921  303486 cri.go:89] found id: ""
	I0920 19:07:38.913955  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.913965  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:38.913972  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:38.914029  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:35.783443  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.284549  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:36.963668  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.963893  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.608347  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:41.106313  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.951849  303486 cri.go:89] found id: ""
	I0920 19:07:38.951882  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.951893  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:38.951902  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:38.951972  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:38.988117  303486 cri.go:89] found id: ""
	I0920 19:07:38.988149  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.988161  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:38.988177  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:38.988191  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:39.028804  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:39.028843  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:39.083374  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:39.083427  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:39.097434  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:39.097463  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:39.172185  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:39.172213  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:39.172226  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:41.756648  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:41.772358  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:41.772432  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:41.809067  303486 cri.go:89] found id: ""
	I0920 19:07:41.809109  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.809123  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:41.809132  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:41.809191  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:41.853413  303486 cri.go:89] found id: ""
	I0920 19:07:41.853445  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.853457  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:41.853465  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:41.853524  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:41.891536  303486 cri.go:89] found id: ""
	I0920 19:07:41.891569  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.891580  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:41.891588  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:41.891668  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:41.931046  303486 cri.go:89] found id: ""
	I0920 19:07:41.931085  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.931093  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:41.931099  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:41.931155  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:41.968120  303486 cri.go:89] found id: ""
	I0920 19:07:41.968152  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.968164  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:41.968172  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:41.968240  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:42.002478  303486 cri.go:89] found id: ""
	I0920 19:07:42.002512  303486 logs.go:276] 0 containers: []
	W0920 19:07:42.002523  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:42.002532  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:42.002599  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:42.038031  303486 cri.go:89] found id: ""
	I0920 19:07:42.038067  303486 logs.go:276] 0 containers: []
	W0920 19:07:42.038080  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:42.038087  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:42.038150  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:42.072124  303486 cri.go:89] found id: ""
	I0920 19:07:42.072155  303486 logs.go:276] 0 containers: []
	W0920 19:07:42.072166  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:42.072178  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:42.072195  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:42.128217  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:42.128259  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:42.142291  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:42.142322  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:42.215278  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:42.215305  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:42.215324  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:42.293431  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:42.293476  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:40.784191  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:43.283580  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:41.463429  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:43.963059  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:43.608790  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:46.105338  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:44.836094  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:44.850327  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:44.850397  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:44.884595  303486 cri.go:89] found id: ""
	I0920 19:07:44.884624  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.884632  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:44.884639  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:44.884711  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:44.917727  303486 cri.go:89] found id: ""
	I0920 19:07:44.917754  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.917763  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:44.917769  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:44.917837  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:44.955821  303486 cri.go:89] found id: ""
	I0920 19:07:44.955860  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.955871  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:44.955879  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:44.955937  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:44.994543  303486 cri.go:89] found id: ""
	I0920 19:07:44.994579  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.994590  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:44.994598  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:44.994651  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:45.031839  303486 cri.go:89] found id: ""
	I0920 19:07:45.031877  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.031888  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:45.031896  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:45.031962  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:45.070554  303486 cri.go:89] found id: ""
	I0920 19:07:45.070588  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.070601  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:45.070609  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:45.070678  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:45.108727  303486 cri.go:89] found id: ""
	I0920 19:07:45.108760  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.108771  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:45.108779  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:45.108855  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:45.144045  303486 cri.go:89] found id: ""
	I0920 19:07:45.144075  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.144083  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:45.144094  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:45.144108  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:45.185800  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:45.185834  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:45.238364  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:45.238410  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:45.252111  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:45.252145  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:45.329009  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:45.329036  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:45.329051  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:47.912910  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:47.926378  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:47.926458  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:47.961067  303486 cri.go:89] found id: ""
	I0920 19:07:47.961094  303486 logs.go:276] 0 containers: []
	W0920 19:07:47.961103  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:47.961111  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:47.961172  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:48.006680  303486 cri.go:89] found id: ""
	I0920 19:07:48.006717  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.006729  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:48.006738  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:48.006805  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:48.042230  303486 cri.go:89] found id: ""
	I0920 19:07:48.042261  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.042272  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:48.042281  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:48.042349  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:48.080779  303486 cri.go:89] found id: ""
	I0920 19:07:48.080836  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.080850  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:48.080860  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:48.080931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:48.119439  303486 cri.go:89] found id: ""
	I0920 19:07:48.119469  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.119477  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:48.119483  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:48.119536  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:48.156219  303486 cri.go:89] found id: ""
	I0920 19:07:48.156258  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.156269  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:48.156279  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:48.156354  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:48.192112  303486 cri.go:89] found id: ""
	I0920 19:07:48.192151  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.192162  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:48.192170  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:48.192240  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:48.228916  303486 cri.go:89] found id: ""
	I0920 19:07:48.228958  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.228968  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:48.228981  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:48.229003  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:48.284073  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:48.284115  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:48.297677  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:48.297713  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:48.374834  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:48.374860  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:48.374876  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:48.455468  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:48.455512  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:45.284055  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:47.783744  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:46.461832  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:48.462980  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:50.463485  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:48.605035  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:51.105952  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:50.998354  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:51.012827  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:51.012904  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:51.046701  303486 cri.go:89] found id: ""
	I0920 19:07:51.046739  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.046750  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:51.046758  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:51.046827  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:51.083829  303486 cri.go:89] found id: ""
	I0920 19:07:51.083867  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.083878  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:51.083891  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:51.083965  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:51.124126  303486 cri.go:89] found id: ""
	I0920 19:07:51.124170  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.124180  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:51.124187  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:51.124254  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:51.159141  303486 cri.go:89] found id: ""
	I0920 19:07:51.159175  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.159184  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:51.159190  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:51.159253  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:51.192793  303486 cri.go:89] found id: ""
	I0920 19:07:51.192829  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.192840  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:51.192863  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:51.192938  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:51.225489  303486 cri.go:89] found id: ""
	I0920 19:07:51.225515  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.225524  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:51.225530  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:51.225582  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:51.258256  303486 cri.go:89] found id: ""
	I0920 19:07:51.258283  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.258294  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:51.258301  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:51.258363  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:51.292474  303486 cri.go:89] found id: ""
	I0920 19:07:51.292504  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.292512  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:51.292522  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:51.292537  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:51.331386  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:51.331422  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:51.385136  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:51.385182  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:51.400792  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:51.400828  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:51.492771  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:51.492795  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:51.492810  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:49.784132  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:52.284075  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:54.284870  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:52.963813  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:55.464095  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:53.607259  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:56.106592  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:54.074889  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:54.088453  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:54.088534  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:54.125096  303486 cri.go:89] found id: ""
	I0920 19:07:54.125138  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.125159  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:54.125166  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:54.125231  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:54.159630  303486 cri.go:89] found id: ""
	I0920 19:07:54.159665  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.159676  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:54.159685  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:54.159759  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:54.195919  303486 cri.go:89] found id: ""
	I0920 19:07:54.195951  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.195965  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:54.195972  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:54.196042  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:54.230294  303486 cri.go:89] found id: ""
	I0920 19:07:54.230323  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.230332  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:54.230339  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:54.230396  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:54.266764  303486 cri.go:89] found id: ""
	I0920 19:07:54.266793  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.266800  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:54.266807  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:54.266865  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:54.300704  303486 cri.go:89] found id: ""
	I0920 19:07:54.300731  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.300741  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:54.300750  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:54.300817  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:54.334447  303486 cri.go:89] found id: ""
	I0920 19:07:54.334473  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.334480  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:54.334487  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:54.334546  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:54.369814  303486 cri.go:89] found id: ""
	I0920 19:07:54.369858  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.369866  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:54.369878  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:54.369890  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:54.423088  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:54.423135  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:54.436770  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:54.436801  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:54.510731  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:54.510757  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:54.510773  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:54.593041  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:54.593091  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:57.134030  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:57.147605  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:57.147674  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:57.202662  303486 cri.go:89] found id: ""
	I0920 19:07:57.202690  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.202699  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:57.202705  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:57.202757  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:57.236448  303486 cri.go:89] found id: ""
	I0920 19:07:57.236476  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.236484  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:57.236493  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:57.236558  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:57.269450  303486 cri.go:89] found id: ""
	I0920 19:07:57.269478  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.269485  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:57.269491  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:57.269544  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:57.305749  303486 cri.go:89] found id: ""
	I0920 19:07:57.305784  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.305795  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:57.305806  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:57.305877  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:57.339802  303486 cri.go:89] found id: ""
	I0920 19:07:57.339844  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.339857  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:57.339866  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:57.339942  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:57.371929  303486 cri.go:89] found id: ""
	I0920 19:07:57.371962  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.371971  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:57.371980  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:57.372051  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:57.405749  303486 cri.go:89] found id: ""
	I0920 19:07:57.405789  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.405802  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:57.405812  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:57.405888  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:57.439259  303486 cri.go:89] found id: ""
	I0920 19:07:57.439291  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.439300  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:57.439310  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:57.439323  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:57.491405  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:57.491450  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:57.505992  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:57.506027  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:57.580598  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:57.580623  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:57.580638  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:57.659475  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:57.659513  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:56.783867  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:58.783944  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:57.465789  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:59.963589  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:58.606492  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:01.105967  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:00.201478  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:00.217162  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:00.217228  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:00.252219  303486 cri.go:89] found id: ""
	I0920 19:08:00.252247  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.252256  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:00.252263  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:00.252334  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:00.287244  303486 cri.go:89] found id: ""
	I0920 19:08:00.287283  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.287295  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:00.287302  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:00.287367  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:00.325785  303486 cri.go:89] found id: ""
	I0920 19:08:00.325818  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.325829  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:00.325839  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:00.325931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:00.359718  303486 cri.go:89] found id: ""
	I0920 19:08:00.359747  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.359757  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:00.359766  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:00.359847  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:00.399105  303486 cri.go:89] found id: ""
	I0920 19:08:00.399147  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.399156  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:00.399163  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:00.399227  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:00.433647  303486 cri.go:89] found id: ""
	I0920 19:08:00.433675  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.433683  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:00.433692  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:00.433756  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:00.467771  303486 cri.go:89] found id: ""
	I0920 19:08:00.467820  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.467832  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:00.467841  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:00.467911  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:00.511320  303486 cri.go:89] found id: ""
	I0920 19:08:00.511363  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.511376  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:00.511392  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:00.511414  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:00.594669  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:00.594703  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:00.594723  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:00.672747  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:00.672800  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:00.710001  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:00.710049  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:00.760333  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:00.760378  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:03.274393  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:03.289260  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:03.289352  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:03.327884  303486 cri.go:89] found id: ""
	I0920 19:08:03.327919  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.327932  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:03.327942  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:03.328015  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:03.367259  303486 cri.go:89] found id: ""
	I0920 19:08:03.367289  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.367297  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:03.367303  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:03.367361  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:03.405843  303486 cri.go:89] found id: ""
	I0920 19:08:03.405899  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.405932  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:03.405942  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:03.406056  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:03.441026  303486 cri.go:89] found id: ""
	I0920 19:08:03.441058  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.441069  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:03.441078  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:03.441147  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:03.477213  303486 cri.go:89] found id: ""
	I0920 19:08:03.477249  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.477261  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:03.477327  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:03.477415  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:03.515843  303486 cri.go:89] found id: ""
	I0920 19:08:03.515880  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.515888  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:03.515895  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:03.515945  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:03.566972  303486 cri.go:89] found id: ""
	I0920 19:08:03.567009  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.567020  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:03.567028  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:03.567097  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:03.616957  303486 cri.go:89] found id: ""
	I0920 19:08:03.617000  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.617013  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:03.617029  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:03.617048  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:03.683140  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:03.683192  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:03.697225  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:03.697267  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:03.770430  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:03.770455  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:03.770478  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:03.848796  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:03.848836  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:01.284245  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:03.284437  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:01.964058  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:04.462786  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:03.607506  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:06.106008  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:06.387706  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:06.401600  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:06.401669  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:06.437854  303486 cri.go:89] found id: ""
	I0920 19:08:06.437890  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.437917  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:06.437926  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:06.437993  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:06.472617  303486 cri.go:89] found id: ""
	I0920 19:08:06.472647  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.472655  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:06.472662  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:06.472718  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:06.510083  303486 cri.go:89] found id: ""
	I0920 19:08:06.510118  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.510131  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:06.510140  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:06.510212  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:06.546388  303486 cri.go:89] found id: ""
	I0920 19:08:06.546418  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.546427  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:06.546434  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:06.546485  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:06.584043  303486 cri.go:89] found id: ""
	I0920 19:08:06.584084  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.584096  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:06.584106  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:06.584182  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:06.622118  303486 cri.go:89] found id: ""
	I0920 19:08:06.622147  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.622155  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:06.622161  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:06.622217  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:06.655513  303486 cri.go:89] found id: ""
	I0920 19:08:06.655552  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.655585  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:06.655593  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:06.655657  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:06.690286  303486 cri.go:89] found id: ""
	I0920 19:08:06.690324  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.690336  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:06.690350  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:06.690368  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:06.729229  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:06.729259  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:06.780368  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:06.780411  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:06.794746  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:06.794782  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:06.866918  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:06.866944  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:06.866967  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:05.784123  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:08.284383  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:06.462855  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:08.466867  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:10.963736  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:08.106490  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:10.606291  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:09.451583  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:09.465111  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:09.465178  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:09.497679  303486 cri.go:89] found id: ""
	I0920 19:08:09.497713  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.497725  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:09.497733  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:09.497797  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:09.535297  303486 cri.go:89] found id: ""
	I0920 19:08:09.535334  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.535345  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:09.535353  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:09.535427  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:09.572449  303486 cri.go:89] found id: ""
	I0920 19:08:09.572482  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.572491  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:09.572498  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:09.572608  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:09.612672  303486 cri.go:89] found id: ""
	I0920 19:08:09.612697  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.612705  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:09.612711  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:09.612797  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:09.654366  303486 cri.go:89] found id: ""
	I0920 19:08:09.654399  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.654408  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:09.654415  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:09.654470  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:09.694825  303486 cri.go:89] found id: ""
	I0920 19:08:09.694858  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.694870  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:09.694878  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:09.694942  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:09.731618  303486 cri.go:89] found id: ""
	I0920 19:08:09.731682  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.731693  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:09.731702  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:09.731775  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:09.766717  303486 cri.go:89] found id: ""
	I0920 19:08:09.766755  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.766765  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:09.766779  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:09.766794  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:09.823505  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:09.823549  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:09.837622  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:09.837658  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:09.919105  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:09.919139  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:09.919156  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:10.000899  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:10.000943  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:12.542974  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:12.557265  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:12.557335  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:12.594099  303486 cri.go:89] found id: ""
	I0920 19:08:12.594126  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.594134  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:12.594140  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:12.594199  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:12.627271  303486 cri.go:89] found id: ""
	I0920 19:08:12.627301  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.627308  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:12.627314  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:12.627366  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:12.661225  303486 cri.go:89] found id: ""
	I0920 19:08:12.661256  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.661265  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:12.661272  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:12.661332  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:12.701381  303486 cri.go:89] found id: ""
	I0920 19:08:12.701424  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.701437  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:12.701447  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:12.701524  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:12.739189  303486 cri.go:89] found id: ""
	I0920 19:08:12.739227  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.739235  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:12.739246  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:12.739299  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:12.780931  303486 cri.go:89] found id: ""
	I0920 19:08:12.780958  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.781055  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:12.781068  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:12.781124  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:12.818097  303486 cri.go:89] found id: ""
	I0920 19:08:12.818137  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.818150  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:12.818161  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:12.818294  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:12.852925  303486 cri.go:89] found id: ""
	I0920 19:08:12.852957  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.852965  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:12.852975  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:12.852990  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:12.924746  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:12.924774  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:12.924791  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:13.005668  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:13.005718  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:13.044327  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:13.044359  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:13.094788  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:13.094833  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:10.284510  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:12.783546  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:12.964694  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:15.463615  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:13.105052  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:15.604922  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:15.611965  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:15.625857  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:15.625960  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:15.662138  303486 cri.go:89] found id: ""
	I0920 19:08:15.662169  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.662177  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:15.662184  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:15.662261  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:15.696000  303486 cri.go:89] found id: ""
	I0920 19:08:15.696067  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.696100  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:15.696115  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:15.696234  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:15.735594  303486 cri.go:89] found id: ""
	I0920 19:08:15.735625  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.735633  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:15.735640  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:15.735699  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:15.774666  303486 cri.go:89] found id: ""
	I0920 19:08:15.774693  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.774703  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:15.774712  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:15.774777  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:15.810754  303486 cri.go:89] found id: ""
	I0920 19:08:15.810799  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.810811  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:15.810820  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:15.810884  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:15.846709  303486 cri.go:89] found id: ""
	I0920 19:08:15.846739  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.846748  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:15.846757  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:15.846819  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:15.880798  303486 cri.go:89] found id: ""
	I0920 19:08:15.880825  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.880833  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:15.880839  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:15.880895  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:15.915119  303486 cri.go:89] found id: ""
	I0920 19:08:15.915150  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.915159  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:15.915170  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:15.915186  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:15.966048  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:15.966087  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:15.979287  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:15.979322  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:16.052129  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:16.052163  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:16.052180  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:16.137743  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:16.137788  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:18.678389  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:18.693073  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:18.693152  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:18.734909  303486 cri.go:89] found id: ""
	I0920 19:08:18.734943  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.734954  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:18.734962  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:18.735028  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:18.773472  303486 cri.go:89] found id: ""
	I0920 19:08:18.773506  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.773517  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:18.773525  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:18.773620  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:18.812184  303486 cri.go:89] found id: ""
	I0920 19:08:18.812218  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.812228  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:18.812236  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:18.812305  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:18.846569  303486 cri.go:89] found id: ""
	I0920 19:08:18.846608  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.846619  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:18.846627  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:18.846700  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:18.881794  303486 cri.go:89] found id: ""
	I0920 19:08:18.881836  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.881862  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:18.881870  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:18.881943  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:18.919657  303486 cri.go:89] found id: ""
	I0920 19:08:18.919688  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.919698  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:18.919708  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:18.919774  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:14.784734  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:17.283590  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:19.284056  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:17.962913  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:20.462190  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:18.105736  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:20.106314  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:22.605231  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:18.955117  303486 cri.go:89] found id: ""
	I0920 19:08:18.955146  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.955157  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:18.955166  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:18.955243  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:18.992389  303486 cri.go:89] found id: ""
	I0920 19:08:18.992422  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.992430  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:18.992444  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:18.992460  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:19.070374  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:19.070417  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:19.110793  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:19.110825  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:19.163783  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:19.163830  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:19.177348  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:19.177387  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:19.249469  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:21.749644  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:21.764920  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:21.765006  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:21.803443  303486 cri.go:89] found id: ""
	I0920 19:08:21.803473  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.803481  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:21.803489  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:21.803545  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:21.844552  303486 cri.go:89] found id: ""
	I0920 19:08:21.844582  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.844593  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:21.844601  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:21.844672  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:21.878979  303486 cri.go:89] found id: ""
	I0920 19:08:21.879007  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.879017  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:21.879029  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:21.879099  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:21.915745  303486 cri.go:89] found id: ""
	I0920 19:08:21.915773  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.915783  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:21.915794  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:21.915865  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:21.948999  303486 cri.go:89] found id: ""
	I0920 19:08:21.949031  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.949043  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:21.949052  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:21.949118  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:21.984238  303486 cri.go:89] found id: ""
	I0920 19:08:21.984269  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.984277  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:21.984284  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:21.984357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:22.018581  303486 cri.go:89] found id: ""
	I0920 19:08:22.018610  303486 logs.go:276] 0 containers: []
	W0920 19:08:22.018620  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:22.018628  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:22.018694  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:22.051868  303486 cri.go:89] found id: ""
	I0920 19:08:22.051903  303486 logs.go:276] 0 containers: []
	W0920 19:08:22.051913  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:22.051925  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:22.051942  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:22.106711  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:22.106756  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:22.120910  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:22.120940  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:22.196564  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:22.196591  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:22.196608  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:22.275235  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:22.275288  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:21.785129  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:24.284359  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:22.463122  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:24.962694  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:25.105050  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:27.105237  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:24.821956  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:24.836846  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:24.836918  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:24.878371  303486 cri.go:89] found id: ""
	I0920 19:08:24.878398  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.878406  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:24.878413  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:24.878464  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:24.911450  303486 cri.go:89] found id: ""
	I0920 19:08:24.911480  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.911489  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:24.911497  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:24.911590  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:24.949248  303486 cri.go:89] found id: ""
	I0920 19:08:24.949281  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.949289  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:24.949298  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:24.949352  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:24.987899  303486 cri.go:89] found id: ""
	I0920 19:08:24.987932  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.987939  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:24.987948  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:24.988011  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:25.020589  303486 cri.go:89] found id: ""
	I0920 19:08:25.020627  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.020638  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:25.020646  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:25.020701  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:25.060223  303486 cri.go:89] found id: ""
	I0920 19:08:25.060250  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.060258  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:25.060266  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:25.060331  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:25.099111  303486 cri.go:89] found id: ""
	I0920 19:08:25.099141  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.099151  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:25.099160  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:25.099242  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:25.136055  303486 cri.go:89] found id: ""
	I0920 19:08:25.136089  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.136098  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:25.136118  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:25.136135  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:25.187619  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:25.187658  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:25.200983  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:25.201016  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:25.270746  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:25.270778  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:25.270795  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:25.350009  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:25.350050  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:27.889864  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:27.903156  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:27.903231  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:27.935087  303486 cri.go:89] found id: ""
	I0920 19:08:27.935118  303486 logs.go:276] 0 containers: []
	W0920 19:08:27.935128  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:27.935138  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:27.935199  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:27.970451  303486 cri.go:89] found id: ""
	I0920 19:08:27.970479  303486 logs.go:276] 0 containers: []
	W0920 19:08:27.970487  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:27.970494  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:27.970545  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:28.004931  303486 cri.go:89] found id: ""
	I0920 19:08:28.004980  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.004992  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:28.005002  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:28.005068  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:28.039438  303486 cri.go:89] found id: ""
	I0920 19:08:28.039470  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.039478  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:28.039485  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:28.039535  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:28.076023  303486 cri.go:89] found id: ""
	I0920 19:08:28.076050  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.076058  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:28.076064  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:28.076131  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:28.114726  303486 cri.go:89] found id: ""
	I0920 19:08:28.114761  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.114772  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:28.114781  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:28.114846  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:28.150790  303486 cri.go:89] found id: ""
	I0920 19:08:28.150822  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.150832  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:28.150841  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:28.150908  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:28.186576  303486 cri.go:89] found id: ""
	I0920 19:08:28.186606  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.186614  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:28.186626  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:28.186648  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:28.240939  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:28.240984  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:28.255267  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:28.255304  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:28.327773  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:28.327797  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:28.327809  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:28.418011  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:28.418055  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:26.785099  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:29.284297  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:26.962825  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:28.963261  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:30.963575  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:29.605453  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:32.104848  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:30.962398  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:30.975385  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:30.975471  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:31.009898  303486 cri.go:89] found id: ""
	I0920 19:08:31.009952  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.009964  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:31.009973  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:31.010044  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:31.043639  303486 cri.go:89] found id: ""
	I0920 19:08:31.043670  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.043679  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:31.043689  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:31.043758  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:31.077709  303486 cri.go:89] found id: ""
	I0920 19:08:31.077745  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.077753  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:31.077759  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:31.077818  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:31.111117  303486 cri.go:89] found id: ""
	I0920 19:08:31.111150  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.111160  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:31.111168  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:31.111234  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:31.143888  303486 cri.go:89] found id: ""
	I0920 19:08:31.143921  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.143933  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:31.143942  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:31.144014  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:31.176694  303486 cri.go:89] found id: ""
	I0920 19:08:31.176729  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.176742  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:31.176751  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:31.176815  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:31.213794  303486 cri.go:89] found id: ""
	I0920 19:08:31.213832  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.213844  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:31.213854  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:31.213946  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:31.250160  303486 cri.go:89] found id: ""
	I0920 19:08:31.250219  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.250230  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:31.250244  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:31.250261  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:31.263748  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:31.263784  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:31.337719  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:31.337749  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:31.337762  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:31.420398  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:31.420446  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:31.459992  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:31.460030  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:31.284818  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:33.783288  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:33.462900  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:35.463122  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:34.105758  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:36.604917  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:34.014229  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:34.028129  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:34.028194  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:34.060793  303486 cri.go:89] found id: ""
	I0920 19:08:34.060832  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.060850  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:34.060859  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:34.060919  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:34.094440  303486 cri.go:89] found id: ""
	I0920 19:08:34.094467  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.094475  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:34.094481  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:34.094544  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:34.128824  303486 cri.go:89] found id: ""
	I0920 19:08:34.128861  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.128872  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:34.128881  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:34.128948  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:34.160861  303486 cri.go:89] found id: ""
	I0920 19:08:34.160894  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.160903  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:34.160911  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:34.160967  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:34.196897  303486 cri.go:89] found id: ""
	I0920 19:08:34.196933  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.196952  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:34.196958  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:34.197020  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:34.229083  303486 cri.go:89] found id: ""
	I0920 19:08:34.229115  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.229125  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:34.229134  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:34.229205  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:34.261877  303486 cri.go:89] found id: ""
	I0920 19:08:34.261922  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.261933  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:34.261941  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:34.262008  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:34.296145  303486 cri.go:89] found id: ""
	I0920 19:08:34.296177  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.296189  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:34.296199  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:34.296214  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:34.361598  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:34.361624  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:34.361641  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:34.441067  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:34.441110  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:34.483333  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:34.483362  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:34.538345  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:34.538388  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:37.053155  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:37.067157  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:37.067230  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:37.101432  303486 cri.go:89] found id: ""
	I0920 19:08:37.101466  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.101476  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:37.101485  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:37.101550  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:37.134375  303486 cri.go:89] found id: ""
	I0920 19:08:37.134408  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.134416  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:37.134423  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:37.134487  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:37.167049  303486 cri.go:89] found id: ""
	I0920 19:08:37.167087  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.167099  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:37.167107  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:37.167175  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:37.209358  303486 cri.go:89] found id: ""
	I0920 19:08:37.209387  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.209397  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:37.209405  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:37.209470  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:37.243227  303486 cri.go:89] found id: ""
	I0920 19:08:37.243261  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.243272  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:37.243281  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:37.243332  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:37.276546  303486 cri.go:89] found id: ""
	I0920 19:08:37.276596  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.276607  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:37.276626  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:37.276688  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:37.311233  303486 cri.go:89] found id: ""
	I0920 19:08:37.311268  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.311279  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:37.311287  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:37.311352  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:37.349970  303486 cri.go:89] found id: ""
	I0920 19:08:37.350003  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.350013  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:37.350025  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:37.350041  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:37.399405  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:37.399445  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:37.423764  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:37.423800  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:37.498797  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:37.498826  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:37.498841  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:37.575521  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:37.575566  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:35.783897  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:37.784496  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:37.463224  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:39.463445  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:38.605444  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:40.606712  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:40.118650  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:40.131967  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:40.132051  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:40.165313  303486 cri.go:89] found id: ""
	I0920 19:08:40.165349  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.165358  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:40.165366  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:40.165439  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:40.197194  303486 cri.go:89] found id: ""
	I0920 19:08:40.197223  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.197232  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:40.197238  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:40.197289  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:40.236769  303486 cri.go:89] found id: ""
	I0920 19:08:40.236800  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.236810  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:40.236819  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:40.236888  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:40.271960  303486 cri.go:89] found id: ""
	I0920 19:08:40.271984  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.271992  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:40.271998  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:40.272049  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:40.307874  303486 cri.go:89] found id: ""
	I0920 19:08:40.307909  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.307917  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:40.307923  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:40.307982  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:40.342128  303486 cri.go:89] found id: ""
	I0920 19:08:40.342160  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.342168  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:40.342175  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:40.342233  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:40.381493  303486 cri.go:89] found id: ""
	I0920 19:08:40.381529  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.381542  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:40.381551  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:40.381617  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:40.415164  303486 cri.go:89] found id: ""
	I0920 19:08:40.415199  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.415211  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:40.415222  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:40.415238  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:40.488306  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:40.488330  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:40.488350  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:40.567193  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:40.567235  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:40.607256  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:40.607287  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:40.659504  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:40.659542  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:43.174043  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:43.188690  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:43.188790  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:43.227223  303486 cri.go:89] found id: ""
	I0920 19:08:43.227251  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.227259  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:43.227267  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:43.227356  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:43.260099  303486 cri.go:89] found id: ""
	I0920 19:08:43.260128  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.260137  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:43.260143  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:43.260195  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:43.297846  303486 cri.go:89] found id: ""
	I0920 19:08:43.297875  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.297886  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:43.297894  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:43.297980  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:43.334026  303486 cri.go:89] found id: ""
	I0920 19:08:43.334061  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.334070  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:43.334078  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:43.334147  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:43.367765  303486 cri.go:89] found id: ""
	I0920 19:08:43.367795  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.367806  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:43.367814  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:43.367890  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:43.402722  303486 cri.go:89] found id: ""
	I0920 19:08:43.402766  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.402778  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:43.402787  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:43.402852  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:43.439643  303486 cri.go:89] found id: ""
	I0920 19:08:43.439674  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.439682  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:43.439690  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:43.439742  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:43.475931  303486 cri.go:89] found id: ""
	I0920 19:08:43.475965  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.475976  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:43.475991  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:43.476006  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:43.545694  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:43.545725  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:43.545739  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:43.627493  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:43.627549  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:43.667758  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:43.667794  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:43.721803  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:43.721851  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:40.285524  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:42.784336  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:41.962300  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:43.963712  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:45.963766  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:43.105271  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:45.105737  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:47.604667  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:46.237499  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:46.250854  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:46.250925  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:46.288918  303486 cri.go:89] found id: ""
	I0920 19:08:46.288950  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.288957  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:46.288964  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:46.289026  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:46.321113  303486 cri.go:89] found id: ""
	I0920 19:08:46.321149  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.321159  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:46.321168  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:46.321239  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:46.359606  303486 cri.go:89] found id: ""
	I0920 19:08:46.359643  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.359652  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:46.359659  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:46.359729  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:46.397059  303486 cri.go:89] found id: ""
	I0920 19:08:46.397089  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.397098  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:46.397104  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:46.397174  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:46.438224  303486 cri.go:89] found id: ""
	I0920 19:08:46.438261  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.438271  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:46.438279  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:46.438355  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:46.476933  303486 cri.go:89] found id: ""
	I0920 19:08:46.476963  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.476973  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:46.476981  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:46.477047  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:46.522115  303486 cri.go:89] found id: ""
	I0920 19:08:46.522150  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.522160  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:46.522167  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:46.522236  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:46.555508  303486 cri.go:89] found id: ""
	I0920 19:08:46.555541  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.555551  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:46.555565  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:46.555580  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:46.632314  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:46.632358  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:46.672381  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:46.672420  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:46.725777  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:46.725835  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:46.739924  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:46.739959  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:46.816667  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:45.284171  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:47.284420  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:49.284798  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:48.462088  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:50.463100  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:49.606279  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:52.105103  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:49.317620  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:49.331792  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:49.331872  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:49.365417  303486 cri.go:89] found id: ""
	I0920 19:08:49.365457  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.365470  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:49.365479  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:49.365543  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:49.399422  303486 cri.go:89] found id: ""
	I0920 19:08:49.399455  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.399465  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:49.399474  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:49.399532  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:49.433040  303486 cri.go:89] found id: ""
	I0920 19:08:49.433069  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.433076  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:49.433082  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:49.433149  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:49.466865  303486 cri.go:89] found id: ""
	I0920 19:08:49.466897  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.466909  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:49.466917  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:49.466986  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:49.499542  303486 cri.go:89] found id: ""
	I0920 19:08:49.499574  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.499583  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:49.499589  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:49.499639  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:49.534310  303486 cri.go:89] found id: ""
	I0920 19:08:49.534338  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.534346  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:49.534353  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:49.534411  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:49.580271  303486 cri.go:89] found id: ""
	I0920 19:08:49.580297  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.580305  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:49.580312  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:49.580385  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:49.626519  303486 cri.go:89] found id: ""
	I0920 19:08:49.626554  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.626562  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:49.626572  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:49.626587  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:49.682923  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:49.682963  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:49.695859  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:49.695895  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:49.767626  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:49.767669  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:49.767697  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:49.849570  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:49.849614  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:52.387653  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:52.400693  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:52.400757  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:52.434320  303486 cri.go:89] found id: ""
	I0920 19:08:52.434358  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.434369  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:52.434381  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:52.434448  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:52.469167  303486 cri.go:89] found id: ""
	I0920 19:08:52.469202  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.469214  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:52.469222  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:52.469291  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:52.504241  303486 cri.go:89] found id: ""
	I0920 19:08:52.504287  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.504295  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:52.504304  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:52.504367  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:52.539573  303486 cri.go:89] found id: ""
	I0920 19:08:52.539604  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.539613  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:52.539619  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:52.539697  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:52.573794  303486 cri.go:89] found id: ""
	I0920 19:08:52.573821  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.573829  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:52.573834  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:52.573931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:52.607628  303486 cri.go:89] found id: ""
	I0920 19:08:52.607660  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.607670  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:52.607676  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:52.607738  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:52.639088  303486 cri.go:89] found id: ""
	I0920 19:08:52.639121  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.639132  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:52.639140  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:52.639204  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:52.673585  303486 cri.go:89] found id: ""
	I0920 19:08:52.673624  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.673636  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:52.673650  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:52.673667  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:52.726463  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:52.726504  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:52.739520  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:52.739553  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:52.820610  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:52.820638  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:52.820653  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:52.898567  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:52.898612  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:51.783687  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:53.784963  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:52.962326  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:54.963069  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:54.105159  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:56.604367  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:55.440875  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:55.454526  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:55.454602  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:55.490616  303486 cri.go:89] found id: ""
	I0920 19:08:55.490655  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.490664  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:55.490671  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:55.490735  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:55.530256  303486 cri.go:89] found id: ""
	I0920 19:08:55.530287  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.530296  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:55.530304  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:55.530357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:55.565209  303486 cri.go:89] found id: ""
	I0920 19:08:55.565242  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.565253  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:55.565260  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:55.565319  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:55.599522  303486 cri.go:89] found id: ""
	I0920 19:08:55.599553  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.599563  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:55.599571  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:55.599634  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:55.634662  303486 cri.go:89] found id: ""
	I0920 19:08:55.634692  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.634700  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:55.634707  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:55.634759  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:55.670326  303486 cri.go:89] found id: ""
	I0920 19:08:55.670361  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.670372  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:55.670379  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:55.670434  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:55.702589  303486 cri.go:89] found id: ""
	I0920 19:08:55.702617  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.702625  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:55.702632  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:55.702694  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:55.737615  303486 cri.go:89] found id: ""
	I0920 19:08:55.737643  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.737653  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:55.737667  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:55.737682  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:55.816827  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:55.816873  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:55.855521  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:55.855550  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:55.905002  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:55.905047  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:55.918292  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:55.918324  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:55.987445  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:58.488566  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:58.503898  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:58.504001  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:58.539089  303486 cri.go:89] found id: ""
	I0920 19:08:58.539117  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.539127  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:58.539135  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:58.539199  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:58.576432  303486 cri.go:89] found id: ""
	I0920 19:08:58.576459  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.576467  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:58.576473  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:58.576542  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:58.613779  303486 cri.go:89] found id: ""
	I0920 19:08:58.613814  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.613825  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:58.613833  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:58.613932  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:58.648717  303486 cri.go:89] found id: ""
	I0920 19:08:58.648757  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.648768  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:58.648777  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:58.648845  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:58.681533  303486 cri.go:89] found id: ""
	I0920 19:08:58.681568  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.681585  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:58.681593  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:58.681647  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:58.714833  303486 cri.go:89] found id: ""
	I0920 19:08:58.714867  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.714877  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:58.714886  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:58.714951  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:58.755939  303486 cri.go:89] found id: ""
	I0920 19:08:58.755972  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.755980  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:58.755986  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:58.756037  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:58.793195  303486 cri.go:89] found id: ""
	I0920 19:08:58.793229  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.793240  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:58.793252  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:58.793267  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:58.807903  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:58.807939  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:58.873993  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:58.874022  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:58.874042  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:56.283846  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.286474  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:56.963398  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.963513  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.606087  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:01.106199  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.955201  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:58.955249  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:58.994230  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:58.994265  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:01.548403  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:01.561467  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:01.561541  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:01.595339  303486 cri.go:89] found id: ""
	I0920 19:09:01.595374  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.595382  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:01.595388  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:01.595463  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:01.631995  303486 cri.go:89] found id: ""
	I0920 19:09:01.632033  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.632043  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:01.632051  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:01.632119  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:01.667556  303486 cri.go:89] found id: ""
	I0920 19:09:01.667586  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.667596  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:01.667604  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:01.667669  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:01.702678  303486 cri.go:89] found id: ""
	I0920 19:09:01.702708  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.702716  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:01.702723  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:01.702786  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:01.739953  303486 cri.go:89] found id: ""
	I0920 19:09:01.739987  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.739999  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:01.740008  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:01.740075  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:01.774188  303486 cri.go:89] found id: ""
	I0920 19:09:01.774222  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.774239  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:01.774249  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:01.774317  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:01.808885  303486 cri.go:89] found id: ""
	I0920 19:09:01.808916  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.808927  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:01.808935  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:01.808997  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:01.842357  303486 cri.go:89] found id: ""
	I0920 19:09:01.842394  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.842404  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:01.842417  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:01.842433  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:01.881750  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:01.881782  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:01.932190  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:01.932236  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:01.946305  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:01.946337  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:02.020099  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:02.020127  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:02.020141  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:00.784428  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:03.284109  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:01.462613  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:03.962360  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:05.963735  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:03.605623  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:06.104994  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:04.601186  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:04.614292  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:04.614374  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:04.649579  303486 cri.go:89] found id: ""
	I0920 19:09:04.649611  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.649619  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:04.649625  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:04.649683  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:04.684039  303486 cri.go:89] found id: ""
	I0920 19:09:04.684076  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.684094  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:04.684108  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:04.684182  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:04.729130  303486 cri.go:89] found id: ""
	I0920 19:09:04.729166  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.729177  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:04.729186  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:04.729244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:04.762646  303486 cri.go:89] found id: ""
	I0920 19:09:04.762682  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.762690  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:04.762697  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:04.762761  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:04.797492  303486 cri.go:89] found id: ""
	I0920 19:09:04.797518  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.797527  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:04.797533  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:04.797588  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:04.832780  303486 cri.go:89] found id: ""
	I0920 19:09:04.832813  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.832823  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:04.832831  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:04.832893  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:04.868489  303486 cri.go:89] found id: ""
	I0920 19:09:04.868526  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.868537  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:04.868546  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:04.868613  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:04.901115  303486 cri.go:89] found id: ""
	I0920 19:09:04.901156  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.901164  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:04.901174  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:04.901186  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:04.952435  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:04.952482  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:04.966450  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:04.966481  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:05.035951  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:05.035977  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:05.035991  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:05.120961  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:05.121016  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:07.659497  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:07.672989  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:07.673062  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:07.708200  303486 cri.go:89] found id: ""
	I0920 19:09:07.708236  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.708247  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:07.708256  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:07.708320  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:07.742116  303486 cri.go:89] found id: ""
	I0920 19:09:07.742156  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.742166  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:07.742175  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:07.742231  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:07.774369  303486 cri.go:89] found id: ""
	I0920 19:09:07.774401  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.774410  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:07.774419  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:07.774485  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:07.811727  303486 cri.go:89] found id: ""
	I0920 19:09:07.811756  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.811763  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:07.811769  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:07.811825  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:07.849613  303486 cri.go:89] found id: ""
	I0920 19:09:07.849646  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.849655  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:07.849661  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:07.849715  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:07.884643  303486 cri.go:89] found id: ""
	I0920 19:09:07.884679  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.884690  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:07.884698  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:07.884770  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:07.920240  303486 cri.go:89] found id: ""
	I0920 19:09:07.920272  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.920283  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:07.920292  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:07.920371  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:07.954729  303486 cri.go:89] found id: ""
	I0920 19:09:07.954768  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.954780  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:07.954792  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:07.954808  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:08.008679  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:08.008732  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:08.023637  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:08.023673  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:08.097298  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:08.097325  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:08.097340  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:08.173404  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:08.173444  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:05.783765  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:08.283642  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:08.462994  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:10.965062  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:08.106350  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:10.605138  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:12.605390  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:10.718224  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:10.732520  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:10.732593  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:10.766764  303486 cri.go:89] found id: ""
	I0920 19:09:10.766800  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.766811  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:10.766821  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:10.766887  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:10.800039  303486 cri.go:89] found id: ""
	I0920 19:09:10.800077  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.800087  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:10.800095  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:10.800157  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:10.833931  303486 cri.go:89] found id: ""
	I0920 19:09:10.833969  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.833979  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:10.833985  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:10.834057  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:10.867714  303486 cri.go:89] found id: ""
	I0920 19:09:10.867752  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.867763  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:10.867771  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:10.867840  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:10.903026  303486 cri.go:89] found id: ""
	I0920 19:09:10.903060  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.903068  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:10.903075  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:10.903131  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:10.936968  303486 cri.go:89] found id: ""
	I0920 19:09:10.937002  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.937013  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:10.937021  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:10.937089  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:10.973055  303486 cri.go:89] found id: ""
	I0920 19:09:10.973079  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.973087  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:10.973093  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:10.973145  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:11.010283  303486 cri.go:89] found id: ""
	I0920 19:09:11.010310  303486 logs.go:276] 0 containers: []
	W0920 19:09:11.010321  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:11.010333  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:11.010352  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:11.025202  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:11.025239  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:11.104268  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:11.104295  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:11.104312  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:11.182281  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:11.182326  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:11.219296  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:11.219335  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:13.767833  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:13.780805  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:13.780890  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:13.822288  303486 cri.go:89] found id: ""
	I0920 19:09:13.822317  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.822327  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:13.822334  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:13.822388  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:13.862068  303486 cri.go:89] found id: ""
	I0920 19:09:13.862098  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.862106  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:13.862112  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:13.862163  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:13.898497  303486 cri.go:89] found id: ""
	I0920 19:09:13.898529  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.898540  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:13.898550  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:13.898618  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:13.935994  303486 cri.go:89] found id: ""
	I0920 19:09:13.936022  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.936030  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:13.936038  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:13.936105  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:10.277863  302869 pod_ready.go:82] duration metric: took 4m0.000569658s for pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace to be "Ready" ...
	E0920 19:09:10.277919  302869 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 19:09:10.277965  302869 pod_ready.go:39] duration metric: took 4m13.052343801s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:10.278003  302869 kubeadm.go:597] duration metric: took 4m21.10965758s to restartPrimaryControlPlane
	W0920 19:09:10.278125  302869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 19:09:10.278168  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:09:13.462752  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:15.962371  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:14.605565  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:17.112026  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:13.973764  303486 cri.go:89] found id: ""
	I0920 19:09:13.973801  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.973812  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:13.973820  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:13.973898  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:14.009443  303486 cri.go:89] found id: ""
	I0920 19:09:14.009482  303486 logs.go:276] 0 containers: []
	W0920 19:09:14.009494  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:14.009502  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:14.009577  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:14.045593  303486 cri.go:89] found id: ""
	I0920 19:09:14.045629  303486 logs.go:276] 0 containers: []
	W0920 19:09:14.045639  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:14.045648  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:14.045714  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:14.086273  303486 cri.go:89] found id: ""
	I0920 19:09:14.086310  303486 logs.go:276] 0 containers: []
	W0920 19:09:14.086319  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:14.086330  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:14.086343  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:14.140730  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:14.140772  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:14.154198  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:14.154232  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:14.224716  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:14.224739  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:14.224754  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:14.302625  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:14.302665  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:16.840816  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:16.854905  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:16.855002  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:16.892994  303486 cri.go:89] found id: ""
	I0920 19:09:16.893028  303486 logs.go:276] 0 containers: []
	W0920 19:09:16.893038  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:16.893045  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:16.893103  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:16.931265  303486 cri.go:89] found id: ""
	I0920 19:09:16.931293  303486 logs.go:276] 0 containers: []
	W0920 19:09:16.931307  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:16.931313  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:16.931364  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:16.970085  303486 cri.go:89] found id: ""
	I0920 19:09:16.970119  303486 logs.go:276] 0 containers: []
	W0920 19:09:16.970129  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:16.970138  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:16.970189  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:17.003163  303486 cri.go:89] found id: ""
	I0920 19:09:17.003194  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.003206  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:17.003214  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:17.003282  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:17.040577  303486 cri.go:89] found id: ""
	I0920 19:09:17.040618  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.040633  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:17.040640  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:17.040706  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:17.073946  303486 cri.go:89] found id: ""
	I0920 19:09:17.073986  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.073995  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:17.074006  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:17.074066  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:17.111569  303486 cri.go:89] found id: ""
	I0920 19:09:17.111636  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.111648  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:17.111664  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:17.111730  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:17.148005  303486 cri.go:89] found id: ""
	I0920 19:09:17.148034  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.148044  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:17.148056  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:17.148072  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:17.222281  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:17.222306  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:17.222324  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:17.297577  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:17.297619  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:17.334709  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:17.334740  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:17.386279  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:17.386320  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:17.962802  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:19.963289  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:19.605813  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:22.105024  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:19.901017  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:19.914489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:19.914571  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:19.955023  303486 cri.go:89] found id: ""
	I0920 19:09:19.955051  303486 logs.go:276] 0 containers: []
	W0920 19:09:19.955060  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:19.955067  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:19.955125  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:19.995536  303486 cri.go:89] found id: ""
	I0920 19:09:19.995575  303486 logs.go:276] 0 containers: []
	W0920 19:09:19.995585  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:19.995594  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:19.995650  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:20.031153  303486 cri.go:89] found id: ""
	I0920 19:09:20.031181  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.031190  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:20.031198  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:20.031266  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:20.064145  303486 cri.go:89] found id: ""
	I0920 19:09:20.064174  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.064190  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:20.064199  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:20.064256  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:20.098399  303486 cri.go:89] found id: ""
	I0920 19:09:20.098429  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.098440  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:20.098449  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:20.098505  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:20.138805  303486 cri.go:89] found id: ""
	I0920 19:09:20.138833  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.138843  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:20.138852  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:20.138914  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:20.183291  303486 cri.go:89] found id: ""
	I0920 19:09:20.183322  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.183333  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:20.183342  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:20.183406  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:20.220344  303486 cri.go:89] found id: ""
	I0920 19:09:20.220378  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.220396  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:20.220409  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:20.220426  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:20.271043  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:20.271086  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:20.286724  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:20.286754  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:20.358233  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:20.358273  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:20.358291  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:20.439511  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:20.439568  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:22.982570  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:22.995384  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:22.995475  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:23.029031  303486 cri.go:89] found id: ""
	I0920 19:09:23.029069  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.029081  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:23.029091  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:23.029166  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:23.063291  303486 cri.go:89] found id: ""
	I0920 19:09:23.063325  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.063336  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:23.063343  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:23.063413  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:23.097494  303486 cri.go:89] found id: ""
	I0920 19:09:23.097525  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.097536  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:23.097545  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:23.097610  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:23.132169  303486 cri.go:89] found id: ""
	I0920 19:09:23.132197  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.132204  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:23.132211  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:23.132276  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:23.173651  303486 cri.go:89] found id: ""
	I0920 19:09:23.173682  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.173692  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:23.173700  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:23.173763  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:23.206098  303486 cri.go:89] found id: ""
	I0920 19:09:23.206135  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.206146  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:23.206155  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:23.206216  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:23.245422  303486 cri.go:89] found id: ""
	I0920 19:09:23.245466  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.245479  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:23.245489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:23.245569  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:23.280326  303486 cri.go:89] found id: ""
	I0920 19:09:23.280357  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.280365  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:23.280376  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:23.280390  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:23.330986  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:23.331034  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:23.344751  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:23.344788  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:23.420213  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:23.420239  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:23.420255  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:23.500449  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:23.500491  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:22.462590  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:24.962516  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:24.105502  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:26.110930  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:26.040050  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:26.056377  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:26.056463  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:26.094122  303486 cri.go:89] found id: ""
	I0920 19:09:26.094160  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.094170  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:26.094179  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:26.094246  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:26.129383  303486 cri.go:89] found id: ""
	I0920 19:09:26.129408  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.129415  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:26.129422  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:26.129472  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:26.163579  303486 cri.go:89] found id: ""
	I0920 19:09:26.163611  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.163621  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:26.163630  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:26.163699  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:26.208026  303486 cri.go:89] found id: ""
	I0920 19:09:26.208057  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.208065  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:26.208071  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:26.208138  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:26.245375  303486 cri.go:89] found id: ""
	I0920 19:09:26.245409  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.245421  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:26.245438  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:26.245500  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:26.280283  303486 cri.go:89] found id: ""
	I0920 19:09:26.280315  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.280326  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:26.280336  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:26.280397  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:26.314621  303486 cri.go:89] found id: ""
	I0920 19:09:26.314657  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.314670  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:26.314679  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:26.314773  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:26.347667  303486 cri.go:89] found id: ""
	I0920 19:09:26.347694  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.347701  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:26.347711  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:26.347722  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:26.397221  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:26.397259  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:26.411126  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:26.411157  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:26.479631  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:26.479657  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:26.479686  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:26.555439  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:26.555477  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:26.962845  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:28.963560  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:28.605949  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:30.612349  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:32.104187  303063 pod_ready.go:82] duration metric: took 4m0.005608637s for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	E0920 19:09:32.104213  303063 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 19:09:32.104224  303063 pod_ready.go:39] duration metric: took 4m5.679030104s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:32.104241  303063 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:09:32.104273  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:32.104327  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:32.151755  303063 cri.go:89] found id: "f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:32.151778  303063 cri.go:89] found id: ""
	I0920 19:09:32.151787  303063 logs.go:276] 1 containers: [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f]
	I0920 19:09:32.151866  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.157358  303063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:32.157426  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:32.201227  303063 cri.go:89] found id: "5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:32.201255  303063 cri.go:89] found id: ""
	I0920 19:09:32.201263  303063 logs.go:276] 1 containers: [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281]
	I0920 19:09:32.201327  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.206508  303063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:32.206604  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:32.243509  303063 cri.go:89] found id: "88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:32.243533  303063 cri.go:89] found id: ""
	I0920 19:09:32.243542  303063 logs.go:276] 1 containers: [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f]
	I0920 19:09:32.243595  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.247764  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:32.247836  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:32.283590  303063 cri.go:89] found id: "25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:32.283627  303063 cri.go:89] found id: ""
	I0920 19:09:32.283637  303063 logs.go:276] 1 containers: [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862]
	I0920 19:09:32.283727  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.287826  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:32.287893  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:32.329071  303063 cri.go:89] found id: "3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:32.329111  303063 cri.go:89] found id: ""
	I0920 19:09:32.329123  303063 logs.go:276] 1 containers: [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4]
	I0920 19:09:32.329196  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.333152  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:32.333236  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:32.372444  303063 cri.go:89] found id: "9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:32.372474  303063 cri.go:89] found id: ""
	I0920 19:09:32.372485  303063 logs.go:276] 1 containers: [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba]
	I0920 19:09:32.372548  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.376414  303063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:32.376494  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:32.412244  303063 cri.go:89] found id: ""
	I0920 19:09:32.412280  303063 logs.go:276] 0 containers: []
	W0920 19:09:32.412291  303063 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:32.412299  303063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 19:09:32.412352  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 19:09:32.449451  303063 cri.go:89] found id: "a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:32.449472  303063 cri.go:89] found id: "0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:32.449477  303063 cri.go:89] found id: ""
	I0920 19:09:32.449485  303063 logs.go:276] 2 containers: [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85]
	I0920 19:09:32.449544  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.454960  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.459688  303063 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:32.459720  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:09:32.599208  303063 logs.go:123] Gathering logs for etcd [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281] ...
	I0920 19:09:32.599241  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:32.656960  303063 logs.go:123] Gathering logs for kube-proxy [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4] ...
	I0920 19:09:32.657000  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:32.703259  303063 logs.go:123] Gathering logs for kube-controller-manager [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba] ...
	I0920 19:09:32.703308  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:32.769218  303063 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:32.769260  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:29.096877  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:29.110081  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:29.110170  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:29.152570  303486 cri.go:89] found id: ""
	I0920 19:09:29.152598  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.152608  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:29.152616  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:29.152689  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:29.188596  303486 cri.go:89] found id: ""
	I0920 19:09:29.188627  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.188638  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:29.188645  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:29.188713  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:29.228789  303486 cri.go:89] found id: ""
	I0920 19:09:29.228831  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.228841  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:29.228850  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:29.228913  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:29.260013  303486 cri.go:89] found id: ""
	I0920 19:09:29.260040  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.260048  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:29.260054  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:29.260105  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:29.293373  303486 cri.go:89] found id: ""
	I0920 19:09:29.293401  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.293411  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:29.293418  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:29.293487  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:29.325860  303486 cri.go:89] found id: ""
	I0920 19:09:29.325898  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.325925  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:29.325935  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:29.326027  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:29.358873  303486 cri.go:89] found id: ""
	I0920 19:09:29.358909  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.358921  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:29.358930  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:29.358994  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:29.392029  303486 cri.go:89] found id: ""
	I0920 19:09:29.392057  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.392067  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:29.392080  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:29.392095  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:29.467460  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:29.467504  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:29.508258  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:29.508298  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:29.559238  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:29.559274  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:29.574233  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:29.574264  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:29.649318  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:32.150539  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:32.168442  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:32.168527  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:32.210069  303486 cri.go:89] found id: ""
	I0920 19:09:32.210103  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.210120  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:32.210129  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:32.210191  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:32.243468  303486 cri.go:89] found id: ""
	I0920 19:09:32.243501  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.243511  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:32.243519  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:32.243586  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:32.275958  303486 cri.go:89] found id: ""
	I0920 19:09:32.275988  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.275996  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:32.276003  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:32.276056  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:32.312560  303486 cri.go:89] found id: ""
	I0920 19:09:32.312598  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.312609  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:32.312620  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:32.312695  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:32.347157  303486 cri.go:89] found id: ""
	I0920 19:09:32.347185  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.347193  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:32.347200  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:32.347264  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:32.382787  303486 cri.go:89] found id: ""
	I0920 19:09:32.382820  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.382832  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:32.382841  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:32.382898  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:32.416182  303486 cri.go:89] found id: ""
	I0920 19:09:32.416216  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.416226  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:32.416234  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:32.416297  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:32.448863  303486 cri.go:89] found id: ""
	I0920 19:09:32.448895  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.448906  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:32.448919  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:32.448934  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:32.501882  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:32.501934  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:32.517984  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:32.518014  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:32.588517  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:32.588547  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:32.588560  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:32.671869  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:32.671921  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:35.211780  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:35.225476  303486 kubeadm.go:597] duration metric: took 4m2.827297435s to restartPrimaryControlPlane
	W0920 19:09:35.225582  303486 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 19:09:35.225618  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:09:35.686956  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:35.701803  303486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:09:35.712572  303486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:09:35.722867  303486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:09:35.722894  303486 kubeadm.go:157] found existing configuration files:
	
	I0920 19:09:35.722948  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:09:35.732295  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:09:35.732358  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:09:35.741569  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:09:35.750515  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:09:35.750577  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:09:35.760469  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:09:35.770207  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:09:35.770284  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:09:35.780121  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:09:35.789887  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:09:35.789974  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:09:35.800914  303486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:09:35.871635  303486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 19:09:35.871691  303486 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:09:36.021411  303486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:09:36.021565  303486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:09:36.021773  303486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 19:09:36.217540  303486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:09:31.462557  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:33.463284  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:35.964501  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:36.723149  302869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.444941441s)
	I0920 19:09:36.723244  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:36.740763  302869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:09:36.751727  302869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:09:36.762710  302869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:09:36.762736  302869 kubeadm.go:157] found existing configuration files:
	
	I0920 19:09:36.762793  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:09:36.773454  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:09:36.773536  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:09:36.784738  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:09:36.794740  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:09:36.794818  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:09:36.805727  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:09:36.818253  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:09:36.818329  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:09:36.831210  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:09:36.842838  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:09:36.842914  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:09:36.853306  302869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:09:36.903121  302869 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 19:09:36.903285  302869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:09:37.025789  302869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:09:37.025969  302869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:09:37.026110  302869 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 19:09:37.034613  302869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:09:36.219542  303486 out.go:235]   - Generating certificates and keys ...
	I0920 19:09:36.219684  303486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:09:36.219769  303486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:09:36.219892  303486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:09:36.219973  303486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:09:36.220090  303486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:09:36.220181  303486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:09:36.220302  303486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:09:36.220414  303486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:09:36.220530  303486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:09:36.220626  303486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:09:36.220691  303486 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:09:36.220767  303486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:09:36.377012  303486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:09:36.706154  303486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:09:36.907341  303486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:09:37.091990  303486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:09:37.122813  303486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:09:37.124422  303486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:09:37.124531  303486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:09:37.277461  303486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:09:33.294289  303063 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:33.294346  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:33.362317  303063 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:33.362364  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:33.375712  303063 logs.go:123] Gathering logs for kube-scheduler [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862] ...
	I0920 19:09:33.375747  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:33.411136  303063 logs.go:123] Gathering logs for storage-provisioner [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5] ...
	I0920 19:09:33.411168  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:33.445649  303063 logs.go:123] Gathering logs for storage-provisioner [0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85] ...
	I0920 19:09:33.445690  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:33.478869  303063 logs.go:123] Gathering logs for container status ...
	I0920 19:09:33.478898  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:33.529433  303063 logs.go:123] Gathering logs for kube-apiserver [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f] ...
	I0920 19:09:33.529480  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:33.570515  303063 logs.go:123] Gathering logs for coredns [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f] ...
	I0920 19:09:33.570560  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:36.107490  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:36.124979  303063 api_server.go:72] duration metric: took 4m17.429642296s to wait for apiserver process to appear ...
	I0920 19:09:36.125014  303063 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:09:36.125069  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:36.125145  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:36.181962  303063 cri.go:89] found id: "f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:36.181990  303063 cri.go:89] found id: ""
	I0920 19:09:36.182001  303063 logs.go:276] 1 containers: [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f]
	I0920 19:09:36.182061  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.186792  303063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:36.186876  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:36.235963  303063 cri.go:89] found id: "5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:36.235993  303063 cri.go:89] found id: ""
	I0920 19:09:36.236003  303063 logs.go:276] 1 containers: [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281]
	I0920 19:09:36.236066  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.241177  303063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:36.241321  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:36.288324  303063 cri.go:89] found id: "88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:36.288353  303063 cri.go:89] found id: ""
	I0920 19:09:36.288361  303063 logs.go:276] 1 containers: [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f]
	I0920 19:09:36.288415  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.293328  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:36.293413  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:36.335126  303063 cri.go:89] found id: "25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:36.335153  303063 cri.go:89] found id: ""
	I0920 19:09:36.335163  303063 logs.go:276] 1 containers: [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862]
	I0920 19:09:36.335226  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.339400  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:36.339470  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:36.375555  303063 cri.go:89] found id: "3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:36.375582  303063 cri.go:89] found id: ""
	I0920 19:09:36.375592  303063 logs.go:276] 1 containers: [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4]
	I0920 19:09:36.375657  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.379679  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:36.379753  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:36.415398  303063 cri.go:89] found id: "9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:36.415424  303063 cri.go:89] found id: ""
	I0920 19:09:36.415434  303063 logs.go:276] 1 containers: [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba]
	I0920 19:09:36.415495  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.420183  303063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:36.420260  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:36.462018  303063 cri.go:89] found id: ""
	I0920 19:09:36.462049  303063 logs.go:276] 0 containers: []
	W0920 19:09:36.462060  303063 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:36.462068  303063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 19:09:36.462129  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 19:09:36.515520  303063 cri.go:89] found id: "a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:36.515551  303063 cri.go:89] found id: "0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:36.515557  303063 cri.go:89] found id: ""
	I0920 19:09:36.515567  303063 logs.go:276] 2 containers: [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85]
	I0920 19:09:36.515628  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.520140  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.524197  303063 logs.go:123] Gathering logs for kube-apiserver [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f] ...
	I0920 19:09:36.524222  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:36.589535  303063 logs.go:123] Gathering logs for coredns [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f] ...
	I0920 19:09:36.589570  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:36.628836  303063 logs.go:123] Gathering logs for storage-provisioner [0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85] ...
	I0920 19:09:36.628865  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:36.667614  303063 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:36.667654  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:37.164164  303063 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:37.164222  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:37.253505  303063 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:37.253550  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:37.272704  303063 logs.go:123] Gathering logs for kube-scheduler [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862] ...
	I0920 19:09:37.272742  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:37.315827  303063 logs.go:123] Gathering logs for kube-proxy [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4] ...
	I0920 19:09:37.315869  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:37.360449  303063 logs.go:123] Gathering logs for kube-controller-manager [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba] ...
	I0920 19:09:37.360479  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:37.428225  303063 logs.go:123] Gathering logs for storage-provisioner [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5] ...
	I0920 19:09:37.428270  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:37.469766  303063 logs.go:123] Gathering logs for container status ...
	I0920 19:09:37.469795  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:37.524517  303063 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:37.524553  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:09:37.652128  303063 logs.go:123] Gathering logs for etcd [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281] ...
	I0920 19:09:37.652162  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:37.036846  302869 out.go:235]   - Generating certificates and keys ...
	I0920 19:09:37.036956  302869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:09:37.037061  302869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:09:37.037194  302869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:09:37.037284  302869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:09:37.037386  302869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:09:37.037462  302869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:09:37.037546  302869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:09:37.037635  302869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:09:37.037734  302869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:09:37.037847  302869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:09:37.037918  302869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:09:37.037995  302869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:09:37.116270  302869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:09:37.615537  302869 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 19:09:37.907479  302869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:09:38.090167  302869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:09:38.209430  302869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:09:38.209780  302869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:09:38.212626  302869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:09:37.279714  303486 out.go:235]   - Booting up control plane ...
	I0920 19:09:37.279861  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:09:37.288448  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:09:37.289724  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:09:37.290822  303486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:09:37.294106  303486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 19:09:38.214873  302869 out.go:235]   - Booting up control plane ...
	I0920 19:09:38.214994  302869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:09:38.215102  302869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:09:38.215199  302869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:09:38.232798  302869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:09:38.238716  302869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:09:38.238784  302869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:09:38.370841  302869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 19:09:38.371037  302869 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 19:09:38.463252  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:40.463322  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:40.212781  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:09:40.217868  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 200:
	ok
	I0920 19:09:40.219021  303063 api_server.go:141] control plane version: v1.31.1
	I0920 19:09:40.219044  303063 api_server.go:131] duration metric: took 4.094023157s to wait for apiserver health ...
	I0920 19:09:40.219053  303063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:09:40.219077  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:40.219128  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:40.264337  303063 cri.go:89] found id: "f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:40.264365  303063 cri.go:89] found id: ""
	I0920 19:09:40.264376  303063 logs.go:276] 1 containers: [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f]
	I0920 19:09:40.264434  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.270143  303063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:40.270222  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:40.321696  303063 cri.go:89] found id: "5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:40.321723  303063 cri.go:89] found id: ""
	I0920 19:09:40.321733  303063 logs.go:276] 1 containers: [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281]
	I0920 19:09:40.321799  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.329068  303063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:40.329149  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:40.387241  303063 cri.go:89] found id: "88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:40.387329  303063 cri.go:89] found id: ""
	I0920 19:09:40.387357  303063 logs.go:276] 1 containers: [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f]
	I0920 19:09:40.387427  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.392896  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:40.392975  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:40.429173  303063 cri.go:89] found id: "25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:40.429200  303063 cri.go:89] found id: ""
	I0920 19:09:40.429210  303063 logs.go:276] 1 containers: [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862]
	I0920 19:09:40.429284  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.434102  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:40.434179  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:40.480569  303063 cri.go:89] found id: "3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:40.480598  303063 cri.go:89] found id: ""
	I0920 19:09:40.480607  303063 logs.go:276] 1 containers: [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4]
	I0920 19:09:40.480669  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.485821  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:40.485935  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:40.531502  303063 cri.go:89] found id: "9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:40.531543  303063 cri.go:89] found id: ""
	I0920 19:09:40.531554  303063 logs.go:276] 1 containers: [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba]
	I0920 19:09:40.531613  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.535699  303063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:40.535769  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:40.569788  303063 cri.go:89] found id: ""
	I0920 19:09:40.569823  303063 logs.go:276] 0 containers: []
	W0920 19:09:40.569835  303063 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:40.569842  303063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 19:09:40.569928  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 19:09:40.604668  303063 cri.go:89] found id: "a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:40.604703  303063 cri.go:89] found id: "0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:40.604710  303063 cri.go:89] found id: ""
	I0920 19:09:40.604721  303063 logs.go:276] 2 containers: [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85]
	I0920 19:09:40.604790  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.608948  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.613331  303063 logs.go:123] Gathering logs for kube-apiserver [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f] ...
	I0920 19:09:40.613360  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:40.657680  303063 logs.go:123] Gathering logs for kube-scheduler [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862] ...
	I0920 19:09:40.657726  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:40.698087  303063 logs.go:123] Gathering logs for kube-controller-manager [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba] ...
	I0920 19:09:40.698125  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:40.753643  303063 logs.go:123] Gathering logs for storage-provisioner [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5] ...
	I0920 19:09:40.753683  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:40.791741  303063 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:40.791790  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:41.176451  303063 logs.go:123] Gathering logs for container status ...
	I0920 19:09:41.176497  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:41.226352  303063 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:41.226386  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:41.307652  303063 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:41.307694  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:41.323271  303063 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:41.323307  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:09:41.441151  303063 logs.go:123] Gathering logs for etcd [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281] ...
	I0920 19:09:41.441195  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:41.495438  303063 logs.go:123] Gathering logs for coredns [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f] ...
	I0920 19:09:41.495494  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:41.543879  303063 logs.go:123] Gathering logs for kube-proxy [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4] ...
	I0920 19:09:41.543930  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:41.595010  303063 logs.go:123] Gathering logs for storage-provisioner [0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85] ...
	I0920 19:09:41.595055  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:44.140048  303063 system_pods.go:59] 8 kube-system pods found
	I0920 19:09:44.140078  303063 system_pods.go:61] "coredns-7c65d6cfc9-427x2" [48b87f9f-4697-4d76-aed1-3d54720172c6] Running
	I0920 19:09:44.140083  303063 system_pods.go:61] "etcd-default-k8s-diff-port-612312" [8537d9ce-87da-425d-a90a-eb4f30f9d23f] Running
	I0920 19:09:44.140087  303063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-612312" [d5705298-0b1f-4f95-bf5d-c795e09dd30e] Running
	I0920 19:09:44.140091  303063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-612312" [99b04f73-91d6-42d4-8def-09b65ca142f6] Running
	I0920 19:09:44.140094  303063 system_pods.go:61] "kube-proxy-zp8l5" [9fe30e51-ef3f-4448-916a-8ad75832b207] Running
	I0920 19:09:44.140097  303063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-612312" [4b9dfd39-5b60-4a90-9edf-0af7e99f0ac6] Running
	I0920 19:09:44.140104  303063 system_pods.go:61] "metrics-server-6867b74b74-2tnqc" [35ce9a11-e606-41da-84bf-b3c5e9a18245] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:44.140108  303063 system_pods.go:61] "storage-provisioner" [6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502] Running
	I0920 19:09:44.140115  303063 system_pods.go:74] duration metric: took 3.921056539s to wait for pod list to return data ...
	I0920 19:09:44.140122  303063 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:09:44.143381  303063 default_sa.go:45] found service account: "default"
	I0920 19:09:44.143409  303063 default_sa.go:55] duration metric: took 3.281031ms for default service account to be created ...
	I0920 19:09:44.143422  303063 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:09:44.148161  303063 system_pods.go:86] 8 kube-system pods found
	I0920 19:09:44.148191  303063 system_pods.go:89] "coredns-7c65d6cfc9-427x2" [48b87f9f-4697-4d76-aed1-3d54720172c6] Running
	I0920 19:09:44.148199  303063 system_pods.go:89] "etcd-default-k8s-diff-port-612312" [8537d9ce-87da-425d-a90a-eb4f30f9d23f] Running
	I0920 19:09:44.148205  303063 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-612312" [d5705298-0b1f-4f95-bf5d-c795e09dd30e] Running
	I0920 19:09:44.148212  303063 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-612312" [99b04f73-91d6-42d4-8def-09b65ca142f6] Running
	I0920 19:09:44.148216  303063 system_pods.go:89] "kube-proxy-zp8l5" [9fe30e51-ef3f-4448-916a-8ad75832b207] Running
	I0920 19:09:44.148221  303063 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-612312" [4b9dfd39-5b60-4a90-9edf-0af7e99f0ac6] Running
	I0920 19:09:44.148230  303063 system_pods.go:89] "metrics-server-6867b74b74-2tnqc" [35ce9a11-e606-41da-84bf-b3c5e9a18245] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:44.148236  303063 system_pods.go:89] "storage-provisioner" [6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502] Running
	I0920 19:09:44.148248  303063 system_pods.go:126] duration metric: took 4.819429ms to wait for k8s-apps to be running ...
	I0920 19:09:44.148260  303063 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:09:44.148312  303063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:44.163839  303063 system_svc.go:56] duration metric: took 15.568956ms WaitForService to wait for kubelet
	I0920 19:09:44.163882  303063 kubeadm.go:582] duration metric: took 4m25.468555427s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:09:44.163911  303063 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:09:44.167622  303063 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:09:44.167656  303063 node_conditions.go:123] node cpu capacity is 2
	I0920 19:09:44.167671  303063 node_conditions.go:105] duration metric: took 3.752828ms to run NodePressure ...
	I0920 19:09:44.167690  303063 start.go:241] waiting for startup goroutines ...
	I0920 19:09:44.167700  303063 start.go:246] waiting for cluster config update ...
	I0920 19:09:44.167716  303063 start.go:255] writing updated cluster config ...
	I0920 19:09:44.168208  303063 ssh_runner.go:195] Run: rm -f paused
	I0920 19:09:44.223860  303063 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:09:44.226056  303063 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-612312" cluster and "default" namespace by default
	I0920 19:09:39.373109  302869 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002236347s
	I0920 19:09:39.373229  302869 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 19:09:44.375102  302869 kubeadm.go:310] [api-check] The API server is healthy after 5.001998039s
	I0920 19:09:44.405405  302869 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 19:09:44.428364  302869 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 19:09:44.470575  302869 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 19:09:44.470870  302869 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-339897 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 19:09:44.505469  302869 kubeadm.go:310] [bootstrap-token] Using token: v5zzut.gmtb3j9b0yqqwvtv
	I0920 19:09:44.507561  302869 out.go:235]   - Configuring RBAC rules ...
	I0920 19:09:44.507721  302869 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 19:09:44.522092  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 19:09:44.555238  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 19:09:44.559971  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 19:09:44.566954  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 19:09:44.574111  302869 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 19:09:44.788900  302869 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 19:09:45.229897  302869 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 19:09:45.788397  302869 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 19:09:45.789415  302869 kubeadm.go:310] 
	I0920 19:09:45.789504  302869 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 19:09:45.789516  302869 kubeadm.go:310] 
	I0920 19:09:45.789614  302869 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 19:09:45.789631  302869 kubeadm.go:310] 
	I0920 19:09:45.789664  302869 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 19:09:45.789804  302869 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 19:09:45.789897  302869 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 19:09:45.789930  302869 kubeadm.go:310] 
	I0920 19:09:45.790043  302869 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 19:09:45.790061  302869 kubeadm.go:310] 
	I0920 19:09:45.790130  302869 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 19:09:45.790145  302869 kubeadm.go:310] 
	I0920 19:09:45.790203  302869 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 19:09:45.790269  302869 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 19:09:45.790330  302869 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 19:09:45.790337  302869 kubeadm.go:310] 
	I0920 19:09:45.790438  302869 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 19:09:45.790549  302869 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 19:09:45.790563  302869 kubeadm.go:310] 
	I0920 19:09:45.790664  302869 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v5zzut.gmtb3j9b0yqqwvtv \
	I0920 19:09:45.790792  302869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 \
	I0920 19:09:45.790823  302869 kubeadm.go:310] 	--control-plane 
	I0920 19:09:45.790835  302869 kubeadm.go:310] 
	I0920 19:09:45.790962  302869 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 19:09:45.790977  302869 kubeadm.go:310] 
	I0920 19:09:45.791045  302869 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v5zzut.gmtb3j9b0yqqwvtv \
	I0920 19:09:45.791164  302869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 
	I0920 19:09:45.792825  302869 kubeadm.go:310] W0920 19:09:36.880654    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:09:45.793122  302869 kubeadm.go:310] W0920 19:09:36.881516    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:09:45.793273  302869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:09:45.793317  302869 cni.go:84] Creating CNI manager for ""
	I0920 19:09:45.793331  302869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:09:45.795282  302869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:09:42.464639  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:44.464714  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:45.796961  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:09:45.808972  302869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:09:45.831122  302869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:09:45.831174  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:45.831208  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-339897 minikube.k8s.io/updated_at=2024_09_20T19_09_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=embed-certs-339897 minikube.k8s.io/primary=true
	I0920 19:09:46.057677  302869 ops.go:34] apiserver oom_adj: -16
	I0920 19:09:46.057798  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:46.558670  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:47.057876  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:47.558913  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:48.057925  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:48.557985  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:49.057925  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:49.558500  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:50.058507  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:50.198032  302869 kubeadm.go:1113] duration metric: took 4.366908909s to wait for elevateKubeSystemPrivileges
	I0920 19:09:50.198074  302869 kubeadm.go:394] duration metric: took 5m1.087269263s to StartCluster
	I0920 19:09:50.198100  302869 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:09:50.198209  302869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:09:50.200736  302869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:09:50.201068  302869 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:09:50.201327  302869 config.go:182] Loaded profile config "embed-certs-339897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:09:50.201393  302869 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:09:50.201482  302869 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-339897"
	I0920 19:09:50.201502  302869 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-339897"
	W0920 19:09:50.201512  302869 addons.go:243] addon storage-provisioner should already be in state true
	I0920 19:09:50.201542  302869 host.go:66] Checking if "embed-certs-339897" exists ...
	I0920 19:09:50.202007  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.202050  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.202261  302869 addons.go:69] Setting default-storageclass=true in profile "embed-certs-339897"
	I0920 19:09:50.202285  302869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-339897"
	I0920 19:09:50.202285  302869 addons.go:69] Setting metrics-server=true in profile "embed-certs-339897"
	I0920 19:09:50.202311  302869 addons.go:234] Setting addon metrics-server=true in "embed-certs-339897"
	W0920 19:09:50.202319  302869 addons.go:243] addon metrics-server should already be in state true
	I0920 19:09:50.202349  302869 host.go:66] Checking if "embed-certs-339897" exists ...
	I0920 19:09:50.202688  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.202752  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.202755  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.202793  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.203329  302869 out.go:177] * Verifying Kubernetes components...
	I0920 19:09:50.204655  302869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:09:50.224081  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46289
	I0920 19:09:50.224334  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45801
	I0920 19:09:50.224337  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0920 19:09:50.224579  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.224941  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.225039  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.225214  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.225231  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.225643  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.225682  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.225699  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.225798  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.225818  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.226018  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.226080  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.226564  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.226594  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.226777  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.227444  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.227494  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.229747  302869 addons.go:234] Setting addon default-storageclass=true in "embed-certs-339897"
	W0920 19:09:50.229771  302869 addons.go:243] addon default-storageclass should already be in state true
	I0920 19:09:50.229803  302869 host.go:66] Checking if "embed-certs-339897" exists ...
	I0920 19:09:50.230208  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.230261  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.243865  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42249
	I0920 19:09:50.244292  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.244828  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.244851  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.245080  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I0920 19:09:50.245252  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.245714  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.245810  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.246303  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.246323  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.246661  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.246806  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.248050  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:09:50.248671  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:09:50.250223  302869 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 19:09:50.250319  302869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:09:46.963562  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:48.965266  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:50.250485  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38237
	I0920 19:09:50.250954  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.251418  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.251435  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.251535  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:09:50.251556  302869 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:09:50.251594  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:09:50.251680  302869 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:09:50.251693  302869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:09:50.251706  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:09:50.251889  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.252452  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.252502  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.255422  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.255692  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.255902  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:09:50.255928  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.256372  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:09:50.256396  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.256442  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:09:50.256663  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:09:50.256697  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:09:50.256840  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:09:50.256868  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:09:50.257066  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:09:50.257089  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:09:50.257268  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:09:50.272424  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35925
	I0920 19:09:50.273107  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.273729  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.273746  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.274208  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.274402  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.276189  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:09:50.276384  302869 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:09:50.276399  302869 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:09:50.276417  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:09:50.279319  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.279718  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:09:50.279747  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.279850  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:09:50.280044  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:09:50.280305  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:09:50.280481  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:09:50.407262  302869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:09:50.455491  302869 node_ready.go:35] waiting up to 6m0s for node "embed-certs-339897" to be "Ready" ...
	I0920 19:09:50.503634  302869 node_ready.go:49] node "embed-certs-339897" has status "Ready":"True"
	I0920 19:09:50.503663  302869 node_ready.go:38] duration metric: took 48.13478ms for node "embed-certs-339897" to be "Ready" ...
	I0920 19:09:50.503672  302869 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:50.532327  302869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:50.589446  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:09:50.589482  302869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 19:09:50.613277  302869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:09:50.619161  302869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:09:50.662197  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:09:50.662232  302869 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:09:50.753073  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:09:50.753106  302869 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:09:50.842679  302869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:09:51.790932  302869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.171721983s)
	I0920 19:09:51.790997  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791012  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.791029  302869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.177708427s)
	I0920 19:09:51.791073  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791089  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.791380  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.791438  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.791444  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.791483  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791380  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.791527  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.791541  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791556  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.791416  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.791493  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.793128  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.793159  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.793177  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.793149  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.793148  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.793208  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.820906  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.820939  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.821290  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.821312  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:52.003182  302869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.160452395s)
	I0920 19:09:52.003247  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:52.003261  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:52.003593  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:52.003600  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:52.003622  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:52.003632  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:52.003640  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:52.003985  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:52.004003  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:52.004017  302869 addons.go:475] Verifying addon metrics-server=true in "embed-certs-339897"
	I0920 19:09:52.006444  302869 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 19:09:52.008313  302869 addons.go:510] duration metric: took 1.806914162s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
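
	[editor's note] The addon enable sequence above copies the yaml files to /etc/kubernetes/addons/ on the node and applies them with the bundled kubectl. A minimal sketch of the metrics-server apply step, assuming the manifests are already on disk and a plain "kubectl" on PATH stands in for the versioned binary path shown in the log; file names and the kubeconfig path are taken from the log:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // Apply the metrics-server addon manifests in one invocation, mirroring
	        // the "kubectl apply -f ... -f ..." command shown in the log above.
	        args := []string{
	            "--kubeconfig=/var/lib/minikube/kubeconfig", "apply",
	            "-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
	            "-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
	            "-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
	            "-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	        }
	        out, err := exec.Command("kubectl", args...).CombinedOutput()
	        fmt.Print(string(out))
	        if err != nil {
	            fmt.Println("apply failed:", err)
	        }
	    }
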
	I0920 19:09:52.539578  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:53.539999  302869 pod_ready.go:93] pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:53.540026  302869 pod_ready.go:82] duration metric: took 3.007669334s for pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:53.540036  302869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:51.463340  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:53.963461  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:55.547997  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:57.552686  302869 pod_ready.go:93] pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.552714  302869 pod_ready.go:82] duration metric: took 4.01267227s for pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.552724  302869 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.560885  302869 pod_ready.go:93] pod "etcd-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.560910  302869 pod_ready.go:82] duration metric: took 8.179457ms for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.560919  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.577414  302869 pod_ready.go:93] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.577441  302869 pod_ready.go:82] duration metric: took 16.515029ms for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.577451  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.588547  302869 pod_ready.go:93] pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.588574  302869 pod_ready.go:82] duration metric: took 11.116334ms for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.588583  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-whcbh" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.594919  302869 pod_ready.go:93] pod "kube-proxy-whcbh" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.594942  302869 pod_ready.go:82] duration metric: took 6.35266ms for pod "kube-proxy-whcbh" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.594951  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.943559  302869 pod_ready.go:93] pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.943585  302869 pod_ready.go:82] duration metric: took 348.626555ms for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.943592  302869 pod_ready.go:39] duration metric: took 7.439908161s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:57.943609  302869 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:09:57.943662  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:57.959537  302869 api_server.go:72] duration metric: took 7.758426976s to wait for apiserver process to appear ...
	I0920 19:09:57.959567  302869 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:09:57.959594  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:09:57.964316  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 200:
	ok
	I0920 19:09:57.965668  302869 api_server.go:141] control plane version: v1.31.1
	I0920 19:09:57.965690  302869 api_server.go:131] duration metric: took 6.115168ms to wait for apiserver health ...
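
	[editor's note] The healthz wait above amounts to polling https://<apiserver-ip>:8443/healthz until it returns HTTP 200. A minimal probe sketch, assuming the apiserver's self-signed certificate is skipped rather than verified; the URL, interval, and timeout are illustrative values:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // waitForHealthz polls url until it returns 200 or the timeout elapses.
	    func waitForHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout:   2 * time.Second,
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("timed out waiting for %s", url)
	    }

	    func main() {
	        if err := waitForHealthz("https://192.168.72.72:8443/healthz", time.Minute); err != nil {
	            fmt.Println(err)
	        } else {
	            fmt.Println("apiserver is healthy")
	        }
	    }
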
	I0920 19:09:57.965697  302869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:09:58.148306  302869 system_pods.go:59] 9 kube-system pods found
	I0920 19:09:58.148339  302869 system_pods.go:61] "coredns-7c65d6cfc9-2zlww" [5eb78763-7160-4ae9-80c3-87a82a6dc992] Running
	I0920 19:09:58.148345  302869 system_pods.go:61] "coredns-7c65d6cfc9-7fxdr" [85a441e8-39b0-4623-a7bd-eebbd1574f20] Running
	I0920 19:09:58.148349  302869 system_pods.go:61] "etcd-embed-certs-339897" [150a2276-3896-498e-89f7-44cf4554da69] Running
	I0920 19:09:58.148352  302869 system_pods.go:61] "kube-apiserver-embed-certs-339897" [396520a3-2567-4267-852d-9f9525dd5e01] Running
	I0920 19:09:58.148356  302869 system_pods.go:61] "kube-controller-manager-embed-certs-339897" [7f64ad97-3230-4cf5-92ad-cf58ef88a2b0] Running
	I0920 19:09:58.148359  302869 system_pods.go:61] "kube-proxy-whcbh" [3a2dbb60-1a51-4874-98b8-75d1a35b0512] Running
	I0920 19:09:58.148361  302869 system_pods.go:61] "kube-scheduler-embed-certs-339897" [31214783-f8cf-46c6-a305-fde7692dfc72] Running
	I0920 19:09:58.148367  302869 system_pods.go:61] "metrics-server-6867b74b74-tw9fh" [8366591d-8916-4b9f-be8a-64ddc185f576] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:58.148371  302869 system_pods.go:61] "storage-provisioner" [8bcc482a-6905-436a-8d90-7eee9ba18f8b] Running
	I0920 19:09:58.148381  302869 system_pods.go:74] duration metric: took 182.677921ms to wait for pod list to return data ...
	I0920 19:09:58.148387  302869 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:09:58.344318  302869 default_sa.go:45] found service account: "default"
	I0920 19:09:58.344346  302869 default_sa.go:55] duration metric: took 195.952788ms for default service account to be created ...
	I0920 19:09:58.344357  302869 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:09:58.547996  302869 system_pods.go:86] 9 kube-system pods found
	I0920 19:09:58.548034  302869 system_pods.go:89] "coredns-7c65d6cfc9-2zlww" [5eb78763-7160-4ae9-80c3-87a82a6dc992] Running
	I0920 19:09:58.548043  302869 system_pods.go:89] "coredns-7c65d6cfc9-7fxdr" [85a441e8-39b0-4623-a7bd-eebbd1574f20] Running
	I0920 19:09:58.548048  302869 system_pods.go:89] "etcd-embed-certs-339897" [150a2276-3896-498e-89f7-44cf4554da69] Running
	I0920 19:09:58.548054  302869 system_pods.go:89] "kube-apiserver-embed-certs-339897" [396520a3-2567-4267-852d-9f9525dd5e01] Running
	I0920 19:09:58.548060  302869 system_pods.go:89] "kube-controller-manager-embed-certs-339897" [7f64ad97-3230-4cf5-92ad-cf58ef88a2b0] Running
	I0920 19:09:58.548066  302869 system_pods.go:89] "kube-proxy-whcbh" [3a2dbb60-1a51-4874-98b8-75d1a35b0512] Running
	I0920 19:09:58.548070  302869 system_pods.go:89] "kube-scheduler-embed-certs-339897" [31214783-f8cf-46c6-a305-fde7692dfc72] Running
	I0920 19:09:58.548079  302869 system_pods.go:89] "metrics-server-6867b74b74-tw9fh" [8366591d-8916-4b9f-be8a-64ddc185f576] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:58.548085  302869 system_pods.go:89] "storage-provisioner" [8bcc482a-6905-436a-8d90-7eee9ba18f8b] Running
	I0920 19:09:58.548099  302869 system_pods.go:126] duration metric: took 203.735171ms to wait for k8s-apps to be running ...
	I0920 19:09:58.548108  302869 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:09:58.548165  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:58.563235  302869 system_svc.go:56] duration metric: took 15.107997ms WaitForService to wait for kubelet
	I0920 19:09:58.563274  302869 kubeadm.go:582] duration metric: took 8.362165276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:09:58.563299  302869 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:09:58.744093  302869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:09:58.744155  302869 node_conditions.go:123] node cpu capacity is 2
	I0920 19:09:58.744171  302869 node_conditions.go:105] duration metric: took 180.864643ms to run NodePressure ...
	I0920 19:09:58.744186  302869 start.go:241] waiting for startup goroutines ...
	I0920 19:09:58.744196  302869 start.go:246] waiting for cluster config update ...
	I0920 19:09:58.744220  302869 start.go:255] writing updated cluster config ...
	I0920 19:09:58.744526  302869 ssh_runner.go:195] Run: rm -f paused
	I0920 19:09:58.794946  302869 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:09:58.797418  302869 out.go:177] * Done! kubectl is now configured to use "embed-certs-339897" cluster and "default" namespace by default
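
	[editor's note] The pod_ready waits that precede the Done! line (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) can be reproduced from the host with kubectl wait. A minimal sketch, assuming the profile name doubles as the kubectl context (minikube's default) and that kubectl is on PATH; the coredns label selector and timeout are illustrative:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // Block until every coredns pod in kube-system reports the Ready
	        // condition, roughly what minikube's pod_ready wait does above.
	        out, err := exec.Command("kubectl",
	            "--context", "embed-certs-339897",
	            "-n", "kube-system",
	            "wait", "--for=condition=Ready",
	            "pod", "-l", "k8s-app=kube-dns",
	            "--timeout=6m").CombinedOutput()
	        fmt.Print(string(out))
	        if err != nil {
	            fmt.Println("pods did not become Ready:", err)
	        }
	    }
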
	I0920 19:09:56.464024  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:58.464282  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:00.963419  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:02.963506  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:04.963804  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:07.463546  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:09.962855  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:11.963447  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:13.964915  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
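
	[editor's note] The repeated "Ready":"False" lines above show the wait loop checking the metrics-server pod's Ready condition until it times out. When debugging this failure, dumping the pod's status conditions shows which condition is blocking. A minimal sketch, assuming kubectl points at the affected cluster and using the conventional k8s-app=metrics-server label from the upstream manifests (not confirmed by this log):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // Print the status conditions of the first metrics-server pod in
	        // kube-system to see why it never reaches Ready.
	        out, err := exec.Command("kubectl",
	            "-n", "kube-system",
	            "get", "pod", "-l", "k8s-app=metrics-server",
	            "-o", "jsonpath={.items[0].status.conditions}").CombinedOutput()
	        fmt.Println(string(out))
	        if err != nil {
	            fmt.Println("kubectl failed:", err)
	        }
	    }
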
	I0920 19:10:17.296411  303486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 19:10:17.296525  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:17.296765  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:16.462968  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:18.963906  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:22.297630  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:22.297923  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:21.463201  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:22.457112  302538 pod_ready.go:82] duration metric: took 4m0.000881628s for pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace to be "Ready" ...
	E0920 19:10:22.457161  302538 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 19:10:22.457180  302538 pod_ready.go:39] duration metric: took 4m14.047738931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:10:22.457208  302538 kubeadm.go:597] duration metric: took 4m21.028566787s to restartPrimaryControlPlane
	W0920 19:10:22.457265  302538 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 19:10:22.457291  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:10:32.298239  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:32.298525  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:48.632052  302538 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.17473972s)
	I0920 19:10:48.632143  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:10:48.648205  302538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:10:48.658969  302538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:10:48.668954  302538 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:10:48.668981  302538 kubeadm.go:157] found existing configuration files:
	
	I0920 19:10:48.669035  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:10:48.678138  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:10:48.678229  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:10:48.687960  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:10:48.697578  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:10:48.697644  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:10:48.707573  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:10:48.717059  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:10:48.717123  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:10:48.727642  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:10:48.737599  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:10:48.737681  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
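
	[editor's note] The stale-config check above follows a simple pattern: for each kubeconfig-style file under /etc/kubernetes, keep it only if it already references https://control-plane.minikube.internal:8443, otherwise remove it so the following kubeadm init can regenerate it. A minimal sketch of the same logic run locally on the node (the grep-over-ssh plumbing from the log is omitted):

	    package main

	    import (
	        "fmt"
	        "os"
	        "strings"
	    )

	    func main() {
	        const endpoint = "https://control-plane.minikube.internal:8443"
	        files := []string{
	            "/etc/kubernetes/admin.conf",
	            "/etc/kubernetes/kubelet.conf",
	            "/etc/kubernetes/controller-manager.conf",
	            "/etc/kubernetes/scheduler.conf",
	        }
	        for _, f := range files {
	            data, err := os.ReadFile(f)
	            if err != nil || !strings.Contains(string(data), endpoint) {
	                fmt.Printf("removing stale or missing config %s\n", f)
	                _ = os.Remove(f) // ignore "no such file", like rm -f does
	            }
	        }
	    }
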
	I0920 19:10:48.749542  302538 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:10:48.795278  302538 kubeadm.go:310] W0920 19:10:48.780113    2961 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:10:48.796096  302538 kubeadm.go:310] W0920 19:10:48.780928    2961 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:10:48.910958  302538 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:10:52.299257  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:52.299561  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:56.716717  302538 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 19:10:56.716805  302538 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:10:56.716938  302538 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:10:56.717078  302538 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:10:56.717170  302538 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 19:10:56.717225  302538 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:10:56.719086  302538 out.go:235]   - Generating certificates and keys ...
	I0920 19:10:56.719199  302538 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:10:56.719286  302538 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:10:56.719407  302538 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:10:56.719505  302538 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:10:56.719624  302538 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:10:56.719720  302538 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:10:56.719811  302538 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:10:56.719928  302538 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:10:56.720049  302538 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:10:56.720154  302538 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:10:56.720224  302538 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:10:56.720287  302538 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:10:56.720334  302538 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:10:56.720386  302538 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 19:10:56.720432  302538 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:10:56.720486  302538 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:10:56.720533  302538 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:10:56.720606  302538 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:10:56.720701  302538 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:10:56.722504  302538 out.go:235]   - Booting up control plane ...
	I0920 19:10:56.722620  302538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:10:56.722748  302538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:10:56.722872  302538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:10:56.723020  302538 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:10:56.723105  302538 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:10:56.723148  302538 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:10:56.723337  302538 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 19:10:56.723455  302538 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 19:10:56.723515  302538 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.448196ms
	I0920 19:10:56.723612  302538 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 19:10:56.723706  302538 kubeadm.go:310] [api-check] The API server is healthy after 5.001495273s
	I0920 19:10:56.723888  302538 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 19:10:56.724046  302538 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 19:10:56.724131  302538 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 19:10:56.724406  302538 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-037711 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 19:10:56.724464  302538 kubeadm.go:310] [bootstrap-token] Using token: 2hi1gl.ipidz4nvj8gip8th
	I0920 19:10:56.726099  302538 out.go:235]   - Configuring RBAC rules ...
	I0920 19:10:56.726212  302538 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 19:10:56.726315  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 19:10:56.726479  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 19:10:56.726641  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 19:10:56.726794  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 19:10:56.726926  302538 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 19:10:56.727082  302538 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 19:10:56.727154  302538 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 19:10:56.727202  302538 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 19:10:56.727209  302538 kubeadm.go:310] 
	I0920 19:10:56.727261  302538 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 19:10:56.727267  302538 kubeadm.go:310] 
	I0920 19:10:56.727363  302538 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 19:10:56.727383  302538 kubeadm.go:310] 
	I0920 19:10:56.727424  302538 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 19:10:56.727507  302538 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 19:10:56.727607  302538 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 19:10:56.727620  302538 kubeadm.go:310] 
	I0920 19:10:56.727699  302538 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 19:10:56.727712  302538 kubeadm.go:310] 
	I0920 19:10:56.727775  302538 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 19:10:56.727790  302538 kubeadm.go:310] 
	I0920 19:10:56.727865  302538 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 19:10:56.727969  302538 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 19:10:56.728032  302538 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 19:10:56.728038  302538 kubeadm.go:310] 
	I0920 19:10:56.728106  302538 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 19:10:56.728171  302538 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 19:10:56.728177  302538 kubeadm.go:310] 
	I0920 19:10:56.728271  302538 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2hi1gl.ipidz4nvj8gip8th \
	I0920 19:10:56.728406  302538 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 \
	I0920 19:10:56.728438  302538 kubeadm.go:310] 	--control-plane 
	I0920 19:10:56.728451  302538 kubeadm.go:310] 
	I0920 19:10:56.728571  302538 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 19:10:56.728577  302538 kubeadm.go:310] 
	I0920 19:10:56.728675  302538 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2hi1gl.ipidz4nvj8gip8th \
	I0920 19:10:56.728823  302538 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 
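
	[editor's note] The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A minimal sketch that recomputes it on the control-plane node, assuming the standard kubeadm CA path:

	    package main

	    import (
	        "crypto/sha256"
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	    )

	    func main() {
	        // Read the cluster CA certificate and hash its SubjectPublicKeyInfo,
	        // which is what kubeadm prints as sha256:<hex> above.
	        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	        if err != nil {
	            panic(err)
	        }
	        block, _ := pem.Decode(pemBytes)
	        if block == nil {
	            panic("no PEM block found in ca.crt")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            panic(err)
	        }
	        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	        fmt.Printf("sha256:%x\n", sum)
	    }
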
	I0920 19:10:56.728837  302538 cni.go:84] Creating CNI manager for ""
	I0920 19:10:56.728843  302538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:10:56.730851  302538 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:10:56.732462  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:10:56.745326  302538 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:10:56.764458  302538 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:10:56.764563  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:56.764620  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-037711 minikube.k8s.io/updated_at=2024_09_20T19_10_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=no-preload-037711 minikube.k8s.io/primary=true
	I0920 19:10:56.792026  302538 ops.go:34] apiserver oom_adj: -16
	I0920 19:10:56.976178  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:57.477172  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:57.977076  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:58.476357  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:58.977162  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:59.476924  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:59.976506  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:11:00.080925  302538 kubeadm.go:1113] duration metric: took 3.316440483s to wait for elevateKubeSystemPrivileges
	I0920 19:11:00.080968  302538 kubeadm.go:394] duration metric: took 4m58.701872852s to StartCluster
	I0920 19:11:00.080994  302538 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:11:00.081106  302538 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:11:00.082815  302538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:11:00.083064  302538 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:11:00.083137  302538 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:11:00.083243  302538 addons.go:69] Setting storage-provisioner=true in profile "no-preload-037711"
	I0920 19:11:00.083263  302538 addons.go:234] Setting addon storage-provisioner=true in "no-preload-037711"
	W0920 19:11:00.083272  302538 addons.go:243] addon storage-provisioner should already be in state true
	I0920 19:11:00.083263  302538 addons.go:69] Setting default-storageclass=true in profile "no-preload-037711"
	I0920 19:11:00.083299  302538 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-037711"
	I0920 19:11:00.083308  302538 host.go:66] Checking if "no-preload-037711" exists ...
	I0920 19:11:00.083304  302538 addons.go:69] Setting metrics-server=true in profile "no-preload-037711"
	I0920 19:11:00.083342  302538 addons.go:234] Setting addon metrics-server=true in "no-preload-037711"
	W0920 19:11:00.083354  302538 addons.go:243] addon metrics-server should already be in state true
	I0920 19:11:00.083385  302538 host.go:66] Checking if "no-preload-037711" exists ...
	I0920 19:11:00.083315  302538 config.go:182] Loaded profile config "no-preload-037711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:11:00.083667  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.083709  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.083715  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.083750  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.083864  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.083912  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.084969  302538 out.go:177] * Verifying Kubernetes components...
	I0920 19:11:00.086652  302538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:11:00.102128  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
	I0920 19:11:00.102362  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33853
	I0920 19:11:00.102750  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43575
	I0920 19:11:00.102879  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.103041  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.103431  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.103635  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.103651  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.103767  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.103783  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.104022  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.104040  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.104042  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.104180  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.104383  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.104394  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.104842  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.104881  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.104927  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.104963  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.107816  302538 addons.go:234] Setting addon default-storageclass=true in "no-preload-037711"
	W0920 19:11:00.107836  302538 addons.go:243] addon default-storageclass should already be in state true
	I0920 19:11:00.107865  302538 host.go:66] Checking if "no-preload-037711" exists ...
	I0920 19:11:00.108193  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.108236  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.121661  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I0920 19:11:00.122693  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.123520  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.123642  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.124299  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.124530  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.125624  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40695
	I0920 19:11:00.126343  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.126439  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I0920 19:11:00.126868  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.126947  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:11:00.127277  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.127302  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.127572  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.127599  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.127646  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.127902  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.128095  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.128318  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.128360  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.129099  302538 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:11:00.129788  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:11:00.130688  302538 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:11:00.130713  302538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:11:00.130732  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:11:00.131393  302538 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 19:11:00.132404  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:11:00.132432  302538 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:11:00.132454  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:11:00.134112  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.134627  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:11:00.134690  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.135041  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:11:00.135215  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:11:00.135448  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:11:00.135550  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:11:00.136315  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.136816  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:11:00.136849  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.137011  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:11:00.137231  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:11:00.137409  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:11:00.137589  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:11:00.166369  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I0920 19:11:00.166884  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.167464  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.167483  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.167850  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.168037  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.169668  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:11:00.169875  302538 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:11:00.169891  302538 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:11:00.169925  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:11:00.172907  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.173383  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:11:00.173416  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.173577  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:11:00.173820  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:11:00.174010  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:11:00.174212  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:11:00.275468  302538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:11:00.290839  302538 node_ready.go:35] waiting up to 6m0s for node "no-preload-037711" to be "Ready" ...
	I0920 19:11:00.300222  302538 node_ready.go:49] node "no-preload-037711" has status "Ready":"True"
	I0920 19:11:00.300244  302538 node_ready.go:38] duration metric: took 9.368069ms for node "no-preload-037711" to be "Ready" ...
	I0920 19:11:00.300253  302538 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:11:00.306099  302538 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:00.364927  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:11:00.364956  302538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 19:11:00.382910  302538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:11:00.392581  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:11:00.392611  302538 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:11:00.404275  302538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:11:00.442677  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:11:00.442707  302538 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:11:00.500976  302538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:11:01.337157  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337196  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337169  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337265  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337558  302538 main.go:141] libmachine: (no-preload-037711) DBG | Closing plugin on server side
	I0920 19:11:01.337573  302538 main.go:141] libmachine: (no-preload-037711) DBG | Closing plugin on server side
	I0920 19:11:01.337600  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.337613  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.337641  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337649  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337685  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.337702  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.337711  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337720  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337961  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.337978  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.338064  302538 main.go:141] libmachine: (no-preload-037711) DBG | Closing plugin on server side
	I0920 19:11:01.338114  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.338133  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.395956  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.395989  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.396327  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.396355  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.580133  302538 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.079115769s)
	I0920 19:11:01.580188  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.580203  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.580548  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.580568  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.580578  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.580586  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.580817  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.580842  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.580853  302538 addons.go:475] Verifying addon metrics-server=true in "no-preload-037711"
	I0920 19:11:01.582786  302538 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 19:11:01.584283  302538 addons.go:510] duration metric: took 1.501156808s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 19:11:02.314471  302538 pod_ready.go:103] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:04.817174  302538 pod_ready.go:103] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:07.312399  302538 pod_ready.go:103] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:07.812969  302538 pod_ready.go:93] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:07.812999  302538 pod_ready.go:82] duration metric: took 7.506877081s for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:07.813008  302538 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:07.818172  302538 pod_ready.go:93] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:07.818200  302538 pod_ready.go:82] duration metric: took 5.184579ms for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:07.818211  302538 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:09.825772  302538 pod_ready.go:103] pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:10.325453  302538 pod_ready.go:93] pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:10.325479  302538 pod_ready.go:82] duration metric: took 2.507262085s for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:10.325489  302538 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:10.331181  302538 pod_ready.go:93] pod "kube-scheduler-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:10.331208  302538 pod_ready.go:82] duration metric: took 5.711573ms for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:10.331216  302538 pod_ready.go:39] duration metric: took 10.030954081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:11:10.331233  302538 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:11:10.331286  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:11:10.348104  302538 api_server.go:72] duration metric: took 10.265008499s to wait for apiserver process to appear ...
	I0920 19:11:10.348135  302538 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:11:10.348157  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:11:10.352242  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 200:
	ok
	I0920 19:11:10.353228  302538 api_server.go:141] control plane version: v1.31.1
	I0920 19:11:10.353249  302538 api_server.go:131] duration metric: took 5.107446ms to wait for apiserver health ...
	I0920 19:11:10.353257  302538 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:11:10.358560  302538 system_pods.go:59] 9 kube-system pods found
	I0920 19:11:10.358588  302538 system_pods.go:61] "coredns-7c65d6cfc9-gdfh9" [61c6d6d8-62b9-4db3-a3c3-fd0daec82a9f] Running
	I0920 19:11:10.358593  302538 system_pods.go:61] "coredns-7c65d6cfc9-h84nm" [6ada3ba7-1ccd-474b-850b-c00a77dfbb92] Running
	I0920 19:11:10.358597  302538 system_pods.go:61] "etcd-no-preload-037711" [9ace2dcd-0562-46d5-99be-65be4ea053d9] Running
	I0920 19:11:10.358601  302538 system_pods.go:61] "kube-apiserver-no-preload-037711" [1dbfa130-d2dd-420d-a32c-1e82b535c112] Running
	I0920 19:11:10.358604  302538 system_pods.go:61] "kube-controller-manager-no-preload-037711" [56462390-dedd-4281-ac85-2671f7a10cb1] Running
	I0920 19:11:10.358607  302538 system_pods.go:61] "kube-proxy-bvfqh" [2170ef3f-58f0-4d42-9f15-d9c952e0e2ec] Running
	I0920 19:11:10.358610  302538 system_pods.go:61] "kube-scheduler-no-preload-037711" [e996ce53-7ee6-4d1d-bd0b-8188d76966b9] Running
	I0920 19:11:10.358617  302538 system_pods.go:61] "metrics-server-6867b74b74-rpfqm" [ba7c8518-6c3e-4751-a9a5-29c77990a29c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:11:10.358620  302538 system_pods.go:61] "storage-provisioner" [e7f05c0a-c6be-4e68-959e-966c17c9cc5e] Running
	I0920 19:11:10.358629  302538 system_pods.go:74] duration metric: took 5.365343ms to wait for pod list to return data ...
	I0920 19:11:10.358635  302538 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:11:10.361229  302538 default_sa.go:45] found service account: "default"
	I0920 19:11:10.361255  302538 default_sa.go:55] duration metric: took 2.612292ms for default service account to be created ...
	I0920 19:11:10.361264  302538 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:11:10.367188  302538 system_pods.go:86] 9 kube-system pods found
	I0920 19:11:10.367221  302538 system_pods.go:89] "coredns-7c65d6cfc9-gdfh9" [61c6d6d8-62b9-4db3-a3c3-fd0daec82a9f] Running
	I0920 19:11:10.367229  302538 system_pods.go:89] "coredns-7c65d6cfc9-h84nm" [6ada3ba7-1ccd-474b-850b-c00a77dfbb92] Running
	I0920 19:11:10.367235  302538 system_pods.go:89] "etcd-no-preload-037711" [9ace2dcd-0562-46d5-99be-65be4ea053d9] Running
	I0920 19:11:10.367241  302538 system_pods.go:89] "kube-apiserver-no-preload-037711" [1dbfa130-d2dd-420d-a32c-1e82b535c112] Running
	I0920 19:11:10.367248  302538 system_pods.go:89] "kube-controller-manager-no-preload-037711" [56462390-dedd-4281-ac85-2671f7a10cb1] Running
	I0920 19:11:10.367254  302538 system_pods.go:89] "kube-proxy-bvfqh" [2170ef3f-58f0-4d42-9f15-d9c952e0e2ec] Running
	I0920 19:11:10.367260  302538 system_pods.go:89] "kube-scheduler-no-preload-037711" [e996ce53-7ee6-4d1d-bd0b-8188d76966b9] Running
	I0920 19:11:10.367267  302538 system_pods.go:89] "metrics-server-6867b74b74-rpfqm" [ba7c8518-6c3e-4751-a9a5-29c77990a29c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:11:10.367273  302538 system_pods.go:89] "storage-provisioner" [e7f05c0a-c6be-4e68-959e-966c17c9cc5e] Running
	I0920 19:11:10.367283  302538 system_pods.go:126] duration metric: took 6.01247ms to wait for k8s-apps to be running ...
	I0920 19:11:10.367292  302538 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:11:10.367354  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:11:10.381551  302538 system_svc.go:56] duration metric: took 14.250301ms WaitForService to wait for kubelet
	I0920 19:11:10.381582  302538 kubeadm.go:582] duration metric: took 10.298492318s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:11:10.381601  302538 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:11:10.385405  302538 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:11:10.385442  302538 node_conditions.go:123] node cpu capacity is 2
	I0920 19:11:10.385455  302538 node_conditions.go:105] duration metric: took 3.849463ms to run NodePressure ...
	I0920 19:11:10.385468  302538 start.go:241] waiting for startup goroutines ...
	I0920 19:11:10.385474  302538 start.go:246] waiting for cluster config update ...
	I0920 19:11:10.385485  302538 start.go:255] writing updated cluster config ...
	I0920 19:11:10.385786  302538 ssh_runner.go:195] Run: rm -f paused
	I0920 19:11:10.436362  302538 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:11:10.438538  302538 out.go:177] * Done! kubectl is now configured to use "no-preload-037711" cluster and "default" namespace by default
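	The no-preload-037711 start above finishes with kubectl pointed at the new cluster and the default namespace. A minimal sketch of confirming that from the same workstation, assuming a standard kubectl install (context and node name are taken from the log; the exact flags are only illustrative):

		kubectl config current-context        # expected: no-preload-037711
		kubectl get nodes -o wide             # no-preload-037711 should report Ready, matching the node_ready lines above
		kubectl -n kube-system get pods       # metrics-server may still be Pending, as in the system_pods lines above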
	I0920 19:11:32.301334  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:11:32.302020  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:11:32.302048  303486 kubeadm.go:310] 
	I0920 19:11:32.302147  303486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 19:11:32.302252  303486 kubeadm.go:310] 		timed out waiting for the condition
	I0920 19:11:32.302279  303486 kubeadm.go:310] 
	I0920 19:11:32.302366  303486 kubeadm.go:310] 	This error is likely caused by:
	I0920 19:11:32.302453  303486 kubeadm.go:310] 		- The kubelet is not running
	I0920 19:11:32.302713  303486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 19:11:32.302731  303486 kubeadm.go:310] 
	I0920 19:11:32.303023  303486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 19:11:32.303099  303486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 19:11:32.303200  303486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 19:11:32.303232  303486 kubeadm.go:310] 
	I0920 19:11:32.303438  303486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 19:11:32.303669  303486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 19:11:32.303699  303486 kubeadm.go:310] 
	I0920 19:11:32.303965  303486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 19:11:32.304199  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 19:11:32.304410  303486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 19:11:32.304577  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 19:11:32.304624  303486 kubeadm.go:310] 
	I0920 19:11:32.305105  303486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:11:32.305465  303486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 19:11:32.305655  303486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0920 19:11:32.305713  303486 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
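	The kubeadm failure above ends with its own troubleshooting hints; gathered here as one hedged sketch to run on the node (for example over minikube ssh). The commands and the CRI-O socket path are taken verbatim from the log; CONTAINERID is a placeholder for an ID found by the ps step:

		systemctl status kubelet
		journalctl -xeu kubelet
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID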
	
	I0920 19:11:32.305758  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:11:32.760742  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:11:32.775675  303486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:11:32.785785  303486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:11:32.785806  303486 kubeadm.go:157] found existing configuration files:
	
	I0920 19:11:32.785854  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:11:32.795133  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:11:32.795210  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:11:32.805681  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:11:32.815299  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:11:32.815362  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:11:32.827215  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:11:32.836597  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:11:32.836682  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:11:32.846621  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:11:32.855610  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:11:32.855675  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
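	The config check above removes any kubeconfig that does not reference https://control-plane.minikube.internal:8443 before retrying kubeadm init. A compact sketch of that cleanup, assuming the same four files and endpoint shown in the log (the loop form is an assumption; minikube actually runs each grep/rm pair separately):

		for f in admin kubelet controller-manager scheduler; do
		  sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/$f.conf \
		    || sudo rm -f /etc/kubernetes/$f.conf
		done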
	I0920 19:11:32.866824  303486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:11:33.103745  303486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:13:29.101212  303486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 19:13:29.101347  303486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 19:13:29.103031  303486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 19:13:29.103142  303486 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:13:29.103216  303486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:13:29.103318  303486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:13:29.103437  303486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 19:13:29.103507  303486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:13:29.105521  303486 out.go:235]   - Generating certificates and keys ...
	I0920 19:13:29.105622  303486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:13:29.105704  303486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:13:29.105820  303486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:13:29.105955  303486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:13:29.106058  303486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:13:29.106132  303486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:13:29.106219  303486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:13:29.106318  303486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:13:29.106430  303486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:13:29.106548  303486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:13:29.106611  303486 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:13:29.106699  303486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:13:29.106766  303486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:13:29.106844  303486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:13:29.106935  303486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:13:29.107011  303486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:13:29.107117  303486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:13:29.107223  303486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:13:29.107289  303486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:13:29.107376  303486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:13:29.108804  303486 out.go:235]   - Booting up control plane ...
	I0920 19:13:29.108952  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:13:29.109021  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:13:29.109082  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:13:29.109166  303486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:13:29.109313  303486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 19:13:29.109359  303486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 19:13:29.109462  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.109630  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.109699  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.109878  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.109966  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.110133  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.110213  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.110382  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.110441  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.110606  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.110616  303486 kubeadm.go:310] 
	I0920 19:13:29.110661  303486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 19:13:29.110699  303486 kubeadm.go:310] 		timed out waiting for the condition
	I0920 19:13:29.110706  303486 kubeadm.go:310] 
	I0920 19:13:29.110739  303486 kubeadm.go:310] 	This error is likely caused by:
	I0920 19:13:29.110769  303486 kubeadm.go:310] 		- The kubelet is not running
	I0920 19:13:29.110866  303486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 19:13:29.110875  303486 kubeadm.go:310] 
	I0920 19:13:29.110969  303486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 19:13:29.111003  303486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 19:13:29.111031  303486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 19:13:29.111037  303486 kubeadm.go:310] 
	I0920 19:13:29.111141  303486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 19:13:29.111224  303486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 19:13:29.111231  303486 kubeadm.go:310] 
	I0920 19:13:29.111327  303486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 19:13:29.111407  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 19:13:29.111481  303486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 19:13:29.111542  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 19:13:29.111610  303486 kubeadm.go:394] duration metric: took 7m56.768319159s to StartCluster
	I0920 19:13:29.111640  303486 kubeadm.go:310] 
	I0920 19:13:29.111664  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:13:29.111734  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:13:29.157817  303486 cri.go:89] found id: ""
	I0920 19:13:29.157849  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.157859  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:13:29.157867  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:13:29.157950  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:13:29.192130  303486 cri.go:89] found id: ""
	I0920 19:13:29.192164  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.192179  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:13:29.192187  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:13:29.192243  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:13:29.227594  303486 cri.go:89] found id: ""
	I0920 19:13:29.227631  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.227642  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:13:29.227651  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:13:29.227724  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:13:29.261948  303486 cri.go:89] found id: ""
	I0920 19:13:29.261981  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.261996  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:13:29.262004  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:13:29.262072  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:13:29.295148  303486 cri.go:89] found id: ""
	I0920 19:13:29.295181  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.295191  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:13:29.295200  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:13:29.295270  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:13:29.328094  303486 cri.go:89] found id: ""
	I0920 19:13:29.328127  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.328135  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:13:29.328142  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:13:29.328194  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:13:29.368830  303486 cri.go:89] found id: ""
	I0920 19:13:29.368870  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.368878  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:13:29.368885  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:13:29.368947  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:13:29.420051  303486 cri.go:89] found id: ""
	I0920 19:13:29.420081  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.420091  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:13:29.420106  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:13:29.420123  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:13:29.498322  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:13:29.498350  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:13:29.498364  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:13:29.601796  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:13:29.601842  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:13:29.644325  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:13:29.644368  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:13:29.692691  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:13:29.692736  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0920 19:13:29.707508  303486 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 19:13:29.707577  303486 out.go:270] * 
	W0920 19:13:29.707646  303486 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 19:13:29.707664  303486 out.go:270] * 
	W0920 19:13:29.708560  303486 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 19:13:29.711313  303486 out.go:201] 
	W0920 19:13:29.712520  303486 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 19:13:29.712553  303486 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 19:13:29.712576  303486 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 19:13:29.713832  303486 out.go:201] 
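For reference, the troubleshooting steps suggested by the kubeadm output and by the minikube hint above can be reproduced by hand on the node. This is a minimal sketch, not part of the captured log: the kubelet config path (/var/lib/kubelet/config.yaml) and the health port (10248) are taken from the log lines above, and the systemd cgroup driver is the value minikube suggests rather than anything verified on this host.

    # check whether the kubelet is running and why it may have exited
    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 50
    # probe the health endpoint kubeadm's wait loop was polling
    curl -sSL http://localhost:10248/healthz
    # inspect the cgroup driver the kubelet was configured with
    grep -i cgroupDriver /var/lib/kubelet/config.yaml
    # if the kubelet and runtime drivers disagree, retry with the flag suggested above
    minikube start --extra-config=kubelet.cgroup-driver=systemd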
	
	
	==> CRI-O <==
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.632669944Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860012632644370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7314979b-f6dc-438e-ba5e-b74a95005d5f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.633262010Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc221a7d-30d6-4b46-afb8-99fafb70232c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.633323035Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc221a7d-30d6-4b46-afb8-99fafb70232c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.633524188Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c3b2c73c79f03835b2d7f9bbcac7a9e7daab8399d4394c4f5e57edfe00b04ca,PodSandboxId:5c5864ca73b60ef9f89df1ef0451bc957e0ebdb49178a80c23078b851315309a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859462020503944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f05c0a-c6be-4e68-959e-966c17c9cc5e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:490f881a4145983bd5f22b576485c4167b96e3d03a669a764d2b77d254dbf8c9,PodSandboxId:8e0201363b51353b38f798de986fb30d95024019a1f71029376b7641cdfb392f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859461924533095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h84nm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ada3ba7-1ccd-474b-850b-c00a77dfbb92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3ba4d23673d8beab73a909b9507fce9cc7b80319a2ae48cd1cfd7ea08e5886,PodSandboxId:8984188f2284180601ab0bef07fd03d01c0f41c8e650d91a06bd15bf331625af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859461886672209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gdfh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61
c6d6d8-62b9-4db3-a3c3-fd0daec82a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc3cf8747bd605b197ec515407d007bf6797f30c00b192fc7c04f5b68554df6,PodSandboxId:bc2edec3c4385ecac9ef4cfb1f527fe8422d93c09aaa0077df38f25faec360ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726859461267199152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvfqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2170ef3f-58f0-4d42-9f15-d9c952e0e2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa21f43834f66104e826a833971dede5d642811a29cb6fa3b34b5bfe378a890,PodSandboxId:766829d60dae13109dc74c598c065a06cac4eb91829b41d96c478098bd304244,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859450705134369,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c3d500e11904aa0df64b9be940c73c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782f8908af7306fd228aadd0d307c4db0a42502fea560906311314ac5c6e0b68,PodSandboxId:c5c7ac8990434e3448492ac229767311a5f3b425b397903688ae3cf776db9afd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859450661715750,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad8cc73a7e9abe289ec90a38b985562,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3e6413ae85d5fba0d2fef8822758664194d4f1df781937591a684aabe9ec9c,PodSandboxId:299cb322be009f759f2ffc3b60c7f9b4281f8c034303ae8d9c7c1c9cfd692ed2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859450708544893,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72cb8cb71b12a9dd016202c6ee7de79a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f5bf350558c57eadea4484da5b43d0633125ef5339961e8a0b179f0f4d660f,PodSandboxId:83fb186f2e25d7f973e0915b3c020ba520ff9f155b0c2c61a74864f9e1b44992,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859450618363481,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd0c2280ae4c661dfd29b6dea883efb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c427dc7b4fa266356a38b8def1c1cce91a76dd31495176c88816cd1310deed,PodSandboxId:b3bbf11fb11f2b91152f3dbaa44973c84519501dfdd75f2a6a5157de8b4232c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726859163605757361,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd0c2280ae4c661dfd29b6dea883efb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc221a7d-30d6-4b46-afb8-99fafb70232c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.671667207Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c3b81325-9aa7-4145-a973-8d1974fc988f name=/runtime.v1.RuntimeService/Version
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.671755526Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c3b81325-9aa7-4145-a973-8d1974fc988f name=/runtime.v1.RuntimeService/Version
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.672774148Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b854dbdd-4c41-4392-affd-d10b073f30d5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.673169078Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860012673147393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b854dbdd-4c41-4392-affd-d10b073f30d5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.673657319Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=45a9f23d-3938-451c-aefd-22cd9f906fca name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.673738239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=45a9f23d-3938-451c-aefd-22cd9f906fca name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.673987931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c3b2c73c79f03835b2d7f9bbcac7a9e7daab8399d4394c4f5e57edfe00b04ca,PodSandboxId:5c5864ca73b60ef9f89df1ef0451bc957e0ebdb49178a80c23078b851315309a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859462020503944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f05c0a-c6be-4e68-959e-966c17c9cc5e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:490f881a4145983bd5f22b576485c4167b96e3d03a669a764d2b77d254dbf8c9,PodSandboxId:8e0201363b51353b38f798de986fb30d95024019a1f71029376b7641cdfb392f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859461924533095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h84nm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ada3ba7-1ccd-474b-850b-c00a77dfbb92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3ba4d23673d8beab73a909b9507fce9cc7b80319a2ae48cd1cfd7ea08e5886,PodSandboxId:8984188f2284180601ab0bef07fd03d01c0f41c8e650d91a06bd15bf331625af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859461886672209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gdfh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61
c6d6d8-62b9-4db3-a3c3-fd0daec82a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc3cf8747bd605b197ec515407d007bf6797f30c00b192fc7c04f5b68554df6,PodSandboxId:bc2edec3c4385ecac9ef4cfb1f527fe8422d93c09aaa0077df38f25faec360ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726859461267199152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvfqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2170ef3f-58f0-4d42-9f15-d9c952e0e2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa21f43834f66104e826a833971dede5d642811a29cb6fa3b34b5bfe378a890,PodSandboxId:766829d60dae13109dc74c598c065a06cac4eb91829b41d96c478098bd304244,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859450705134369,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c3d500e11904aa0df64b9be940c73c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782f8908af7306fd228aadd0d307c4db0a42502fea560906311314ac5c6e0b68,PodSandboxId:c5c7ac8990434e3448492ac229767311a5f3b425b397903688ae3cf776db9afd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859450661715750,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad8cc73a7e9abe289ec90a38b985562,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3e6413ae85d5fba0d2fef8822758664194d4f1df781937591a684aabe9ec9c,PodSandboxId:299cb322be009f759f2ffc3b60c7f9b4281f8c034303ae8d9c7c1c9cfd692ed2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859450708544893,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72cb8cb71b12a9dd016202c6ee7de79a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f5bf350558c57eadea4484da5b43d0633125ef5339961e8a0b179f0f4d660f,PodSandboxId:83fb186f2e25d7f973e0915b3c020ba520ff9f155b0c2c61a74864f9e1b44992,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859450618363481,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd0c2280ae4c661dfd29b6dea883efb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c427dc7b4fa266356a38b8def1c1cce91a76dd31495176c88816cd1310deed,PodSandboxId:b3bbf11fb11f2b91152f3dbaa44973c84519501dfdd75f2a6a5157de8b4232c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726859163605757361,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd0c2280ae4c661dfd29b6dea883efb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=45a9f23d-3938-451c-aefd-22cd9f906fca name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.716275620Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e1b5e91-1527-4b96-a6ae-9fa8896de1c7 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.716470985Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e1b5e91-1527-4b96-a6ae-9fa8896de1c7 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.720018497Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b3a6c12e-909d-480a-bb54-913811ae04a0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.720344340Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860012720324223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3a6c12e-909d-480a-bb54-913811ae04a0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.721053025Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e81e4337-b220-418b-981b-2047e8093254 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.721110688Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e81e4337-b220-418b-981b-2047e8093254 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.721331715Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c3b2c73c79f03835b2d7f9bbcac7a9e7daab8399d4394c4f5e57edfe00b04ca,PodSandboxId:5c5864ca73b60ef9f89df1ef0451bc957e0ebdb49178a80c23078b851315309a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859462020503944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f05c0a-c6be-4e68-959e-966c17c9cc5e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:490f881a4145983bd5f22b576485c4167b96e3d03a669a764d2b77d254dbf8c9,PodSandboxId:8e0201363b51353b38f798de986fb30d95024019a1f71029376b7641cdfb392f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859461924533095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h84nm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ada3ba7-1ccd-474b-850b-c00a77dfbb92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3ba4d23673d8beab73a909b9507fce9cc7b80319a2ae48cd1cfd7ea08e5886,PodSandboxId:8984188f2284180601ab0bef07fd03d01c0f41c8e650d91a06bd15bf331625af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859461886672209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gdfh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61
c6d6d8-62b9-4db3-a3c3-fd0daec82a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc3cf8747bd605b197ec515407d007bf6797f30c00b192fc7c04f5b68554df6,PodSandboxId:bc2edec3c4385ecac9ef4cfb1f527fe8422d93c09aaa0077df38f25faec360ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726859461267199152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvfqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2170ef3f-58f0-4d42-9f15-d9c952e0e2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa21f43834f66104e826a833971dede5d642811a29cb6fa3b34b5bfe378a890,PodSandboxId:766829d60dae13109dc74c598c065a06cac4eb91829b41d96c478098bd304244,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859450705134369,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c3d500e11904aa0df64b9be940c73c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782f8908af7306fd228aadd0d307c4db0a42502fea560906311314ac5c6e0b68,PodSandboxId:c5c7ac8990434e3448492ac229767311a5f3b425b397903688ae3cf776db9afd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859450661715750,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad8cc73a7e9abe289ec90a38b985562,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3e6413ae85d5fba0d2fef8822758664194d4f1df781937591a684aabe9ec9c,PodSandboxId:299cb322be009f759f2ffc3b60c7f9b4281f8c034303ae8d9c7c1c9cfd692ed2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859450708544893,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72cb8cb71b12a9dd016202c6ee7de79a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f5bf350558c57eadea4484da5b43d0633125ef5339961e8a0b179f0f4d660f,PodSandboxId:83fb186f2e25d7f973e0915b3c020ba520ff9f155b0c2c61a74864f9e1b44992,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859450618363481,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd0c2280ae4c661dfd29b6dea883efb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c427dc7b4fa266356a38b8def1c1cce91a76dd31495176c88816cd1310deed,PodSandboxId:b3bbf11fb11f2b91152f3dbaa44973c84519501dfdd75f2a6a5157de8b4232c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726859163605757361,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd0c2280ae4c661dfd29b6dea883efb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e81e4337-b220-418b-981b-2047e8093254 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.753743732Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63b2481a-a43e-4355-89df-55c917336c7e name=/runtime.v1.RuntimeService/Version
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.753897840Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63b2481a-a43e-4355-89df-55c917336c7e name=/runtime.v1.RuntimeService/Version
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.755323548Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=552eb029-2e1e-45dd-b0ff-7a304c251d21 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.755953317Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860012755927025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=552eb029-2e1e-45dd-b0ff-7a304c251d21 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.756474404Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db805dc2-9d79-4161-a936-9b57664b0e64 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.756539237Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db805dc2-9d79-4161-a936-9b57664b0e64 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:20:12 no-preload-037711 crio[701]: time="2024-09-20 19:20:12.756729203Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c3b2c73c79f03835b2d7f9bbcac7a9e7daab8399d4394c4f5e57edfe00b04ca,PodSandboxId:5c5864ca73b60ef9f89df1ef0451bc957e0ebdb49178a80c23078b851315309a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859462020503944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f05c0a-c6be-4e68-959e-966c17c9cc5e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:490f881a4145983bd5f22b576485c4167b96e3d03a669a764d2b77d254dbf8c9,PodSandboxId:8e0201363b51353b38f798de986fb30d95024019a1f71029376b7641cdfb392f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859461924533095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h84nm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ada3ba7-1ccd-474b-850b-c00a77dfbb92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3ba4d23673d8beab73a909b9507fce9cc7b80319a2ae48cd1cfd7ea08e5886,PodSandboxId:8984188f2284180601ab0bef07fd03d01c0f41c8e650d91a06bd15bf331625af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859461886672209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gdfh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61
c6d6d8-62b9-4db3-a3c3-fd0daec82a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc3cf8747bd605b197ec515407d007bf6797f30c00b192fc7c04f5b68554df6,PodSandboxId:bc2edec3c4385ecac9ef4cfb1f527fe8422d93c09aaa0077df38f25faec360ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726859461267199152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvfqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2170ef3f-58f0-4d42-9f15-d9c952e0e2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa21f43834f66104e826a833971dede5d642811a29cb6fa3b34b5bfe378a890,PodSandboxId:766829d60dae13109dc74c598c065a06cac4eb91829b41d96c478098bd304244,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859450705134369,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c3d500e11904aa0df64b9be940c73c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782f8908af7306fd228aadd0d307c4db0a42502fea560906311314ac5c6e0b68,PodSandboxId:c5c7ac8990434e3448492ac229767311a5f3b425b397903688ae3cf776db9afd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859450661715750,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad8cc73a7e9abe289ec90a38b985562,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3e6413ae85d5fba0d2fef8822758664194d4f1df781937591a684aabe9ec9c,PodSandboxId:299cb322be009f759f2ffc3b60c7f9b4281f8c034303ae8d9c7c1c9cfd692ed2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859450708544893,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72cb8cb71b12a9dd016202c6ee7de79a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f5bf350558c57eadea4484da5b43d0633125ef5339961e8a0b179f0f4d660f,PodSandboxId:83fb186f2e25d7f973e0915b3c020ba520ff9f155b0c2c61a74864f9e1b44992,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859450618363481,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd0c2280ae4c661dfd29b6dea883efb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c427dc7b4fa266356a38b8def1c1cce91a76dd31495176c88816cd1310deed,PodSandboxId:b3bbf11fb11f2b91152f3dbaa44973c84519501dfdd75f2a6a5157de8b4232c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726859163605757361,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd0c2280ae4c661dfd29b6dea883efb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=db805dc2-9d79-4161-a936-9b57664b0e64 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8c3b2c73c79f0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   5c5864ca73b60       storage-provisioner
	490f881a41459       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   8e0201363b513       coredns-7c65d6cfc9-h84nm
	4b3ba4d23673d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   8984188f22841       coredns-7c65d6cfc9-gdfh9
	7cc3cf8747bd6       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   bc2edec3c4385       kube-proxy-bvfqh
	2c3e6413ae85d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   299cb322be009       etcd-no-preload-037711
	3fa21f43834f6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   766829d60dae1       kube-scheduler-no-preload-037711
	782f8908af730       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   c5c7ac8990434       kube-controller-manager-no-preload-037711
	14f5bf350558c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   83fb186f2e25d       kube-apiserver-no-preload-037711
	21c427dc7b4fa       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   b3bbf11fb11f2       kube-apiserver-no-preload-037711
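The container listing above corresponds to what crictl reports against the CRI-O socket named in the kubeadm advice earlier in this log. A sketch of querying it directly, assuming the unix:///var/run/crio/crio.sock endpoint shown in the node's cri-socket annotation; the truncated container ID is the exited kube-apiserver attempt from the table and is only an example:

    # list all containers, including exited ones, as the table above does
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
    # narrow to control-plane containers, as suggested in the kubeadm output
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # inspect the logs of the exited kube-apiserver container listed above
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs 21c427dc7b4fa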
	
	
	==> coredns [490f881a4145983bd5f22b576485c4167b96e3d03a669a764d2b77d254dbf8c9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [4b3ba4d23673d8beab73a909b9507fce9cc7b80319a2ae48cd1cfd7ea08e5886] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-037711
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-037711
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=no-preload-037711
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T19_10_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 19:10:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-037711
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:20:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:16:12 +0000   Fri, 20 Sep 2024 19:10:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:16:12 +0000   Fri, 20 Sep 2024 19:10:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:16:12 +0000   Fri, 20 Sep 2024 19:10:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:16:12 +0000   Fri, 20 Sep 2024 19:10:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.136
	  Hostname:    no-preload-037711
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 87f8e7b26b6046a299dad16c24bc5fb5
	  System UUID:                87f8e7b2-6b60-46a2-99da-d16c24bc5fb5
	  Boot ID:                    f31ff828-9158-466e-b0c9-85bfb6a5fd29
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-gdfh9                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 coredns-7c65d6cfc9-h84nm                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 etcd-no-preload-037711                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-no-preload-037711             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-no-preload-037711    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-bvfqh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-scheduler-no-preload-037711             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-6867b74b74-rpfqm              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m12s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m10s  kube-proxy       
	  Normal  Starting                 9m18s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s  kubelet          Node no-preload-037711 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s  kubelet          Node no-preload-037711 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s  kubelet          Node no-preload-037711 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m14s  node-controller  Node no-preload-037711 event: Registered Node no-preload-037711 in Controller
	
	
	==> dmesg <==
	[  +0.059001] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041040] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.059321] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.957086] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.556378] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.961994] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.065549] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056560] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.183658] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.141063] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.276193] systemd-fstab-generator[692]: Ignoring "noauto" option for root device
	[Sep20 19:06] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	[  +0.065860] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.852843] systemd-fstab-generator[1352]: Ignoring "noauto" option for root device
	[  +4.628659] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.928437] kauditd_printk_skb: 90 callbacks suppressed
	[Sep20 19:10] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.341484] systemd-fstab-generator[2987]: Ignoring "noauto" option for root device
	[  +4.348387] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.695749] systemd-fstab-generator[3308]: Ignoring "noauto" option for root device
	[  +4.406032] systemd-fstab-generator[3410]: Ignoring "noauto" option for root device
	[  +0.096149] kauditd_printk_skb: 14 callbacks suppressed
	[Sep20 19:11] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [2c3e6413ae85d5fba0d2fef8822758664194d4f1df781937591a684aabe9ec9c] <==
	{"level":"info","ts":"2024-09-20T19:10:51.259442Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-20T19:10:51.259455Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-20T19:10:51.260216Z","caller":"etcdserver/server.go:751","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"3f5f02872cabb0b8","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-09-20T19:10:51.260785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f5f02872cabb0b8 switched to configuration voters=(4566371326770262200)"}
	{"level":"info","ts":"2024-09-20T19:10:51.260910Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"46ee4f926852f428","local-member-id":"3f5f02872cabb0b8","added-peer-id":"3f5f02872cabb0b8","added-peer-peer-urls":["https://192.168.61.136:2380"]}
	{"level":"info","ts":"2024-09-20T19:10:51.315920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f5f02872cabb0b8 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-20T19:10:51.316046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f5f02872cabb0b8 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T19:10:51.316095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f5f02872cabb0b8 received MsgPreVoteResp from 3f5f02872cabb0b8 at term 1"}
	{"level":"info","ts":"2024-09-20T19:10:51.316141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f5f02872cabb0b8 became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T19:10:51.316166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f5f02872cabb0b8 received MsgVoteResp from 3f5f02872cabb0b8 at term 2"}
	{"level":"info","ts":"2024-09-20T19:10:51.316194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f5f02872cabb0b8 became leader at term 2"}
	{"level":"info","ts":"2024-09-20T19:10:51.316219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3f5f02872cabb0b8 elected leader 3f5f02872cabb0b8 at term 2"}
	{"level":"info","ts":"2024-09-20T19:10:51.322017Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:10:51.326136Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3f5f02872cabb0b8","local-member-attributes":"{Name:no-preload-037711 ClientURLs:[https://192.168.61.136:2379]}","request-path":"/0/members/3f5f02872cabb0b8/attributes","cluster-id":"46ee4f926852f428","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T19:10:51.327036Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:10:51.327220Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T19:10:51.327275Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T19:10:51.329960Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"46ee4f926852f428","local-member-id":"3f5f02872cabb0b8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:10:51.330056Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:10:51.330101Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:10:51.330713Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:10:51.333680Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T19:10:51.327820Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:10:51.348752Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:10:51.355585Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.136:2379"}
	
	
	==> kernel <==
	 19:20:13 up 14 min,  0 users,  load average: 0.26, 0.15, 0.10
	Linux no-preload-037711 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [14f5bf350558c57eadea4484da5b43d0633125ef5339961e8a0b179f0f4d660f] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0920 19:15:54.192000       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:15:54.192101       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0920 19:15:54.193141       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 19:15:54.193178       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 19:16:54.193976       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:16:54.194039       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 19:16:54.194087       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:16:54.194143       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 19:16:54.195280       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 19:16:54.195338       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 19:18:54.196380       1 handler_proxy.go:99] no RequestInfo found in the context
	W0920 19:18:54.196403       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:18:54.196938       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0920 19:18:54.196978       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 19:18:54.198175       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 19:18:54.198280       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [21c427dc7b4fa266356a38b8def1c1cce91a76dd31495176c88816cd1310deed] <==
	W0920 19:10:43.500243       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.536320       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.536323       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.571974       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.614261       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.635180       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.694155       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.739228       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.745951       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.802792       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.865914       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.918390       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.989320       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.998999       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.999011       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:44.027755       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:44.035331       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:44.112795       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:44.142711       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:44.426135       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:45.900024       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:47.675613       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:47.989502       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:48.075493       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:48.165403       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [782f8908af7306fd228aadd0d307c4db0a42502fea560906311314ac5c6e0b68] <==
	E0920 19:15:00.160462       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:15:00.625969       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:15:30.168244       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:15:30.639329       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:16:00.177914       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:16:00.648878       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 19:16:12.157210       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-037711"
	E0920 19:16:30.184511       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:16:30.658695       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 19:17:00.008345       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="222.571µs"
	E0920 19:17:00.192235       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:17:00.667490       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 19:17:12.007143       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="66.349µs"
	E0920 19:17:30.199963       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:17:30.679245       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:18:00.208324       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:18:00.687578       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:18:30.214758       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:18:30.697622       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:19:00.222529       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:19:00.708532       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:19:30.229409       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:19:30.730055       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:20:00.236606       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:20:00.738495       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [7cc3cf8747bd605b197ec515407d007bf6797f30c00b192fc7c04f5b68554df6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 19:11:02.259699       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 19:11:02.290659       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.136"]
	E0920 19:11:02.290770       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 19:11:02.355314       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 19:11:02.355369       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 19:11:02.355403       1 server_linux.go:169] "Using iptables Proxier"
	I0920 19:11:02.357714       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 19:11:02.358168       1 server.go:483] "Version info" version="v1.31.1"
	I0920 19:11:02.358413       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:11:02.360233       1 config.go:199] "Starting service config controller"
	I0920 19:11:02.360327       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 19:11:02.360429       1 config.go:105] "Starting endpoint slice config controller"
	I0920 19:11:02.360477       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 19:11:02.361120       1 config.go:328] "Starting node config controller"
	I0920 19:11:02.361186       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 19:11:02.460995       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 19:11:02.461065       1 shared_informer.go:320] Caches are synced for service config
	I0920 19:11:02.461278       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3fa21f43834f66104e826a833971dede5d642811a29cb6fa3b34b5bfe378a890] <==
	W0920 19:10:53.218926       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 19:10:53.219355       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 19:10:53.219401       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 19:10:53.219496       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 19:10:54.118047       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 19:10:54.118133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:10:54.131590       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 19:10:54.131645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:10:54.200227       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 19:10:54.200280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:10:54.352150       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 19:10:54.352953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:10:54.393946       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 19:10:54.393993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:10:54.407494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 19:10:54.407542       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:10:54.411387       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 19:10:54.411458       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:10:54.501781       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 19:10:54.501892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 19:10:54.506334       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 19:10:54.506500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:10:54.705819       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 19:10:54.705897       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 19:10:57.008062       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 19:19:02 no-preload-037711 kubelet[3315]: E0920 19:19:02.989506    3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rpfqm" podUID="ba7c8518-6c3e-4751-a9a5-29c77990a29c"
	Sep 20 19:19:06 no-preload-037711 kubelet[3315]: E0920 19:19:06.177940    3315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859946177538786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:19:06 no-preload-037711 kubelet[3315]: E0920 19:19:06.178818    3315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859946177538786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:19:16 no-preload-037711 kubelet[3315]: E0920 19:19:16.180109    3315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859956179781784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:19:16 no-preload-037711 kubelet[3315]: E0920 19:19:16.180485    3315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859956179781784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:19:16 no-preload-037711 kubelet[3315]: E0920 19:19:16.989462    3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rpfqm" podUID="ba7c8518-6c3e-4751-a9a5-29c77990a29c"
	Sep 20 19:19:26 no-preload-037711 kubelet[3315]: E0920 19:19:26.182092    3315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859966181703397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:19:26 no-preload-037711 kubelet[3315]: E0920 19:19:26.183945    3315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859966181703397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:19:30 no-preload-037711 kubelet[3315]: E0920 19:19:30.989568    3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rpfqm" podUID="ba7c8518-6c3e-4751-a9a5-29c77990a29c"
	Sep 20 19:19:36 no-preload-037711 kubelet[3315]: E0920 19:19:36.184868    3315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859976184564766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:19:36 no-preload-037711 kubelet[3315]: E0920 19:19:36.184907    3315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859976184564766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:19:42 no-preload-037711 kubelet[3315]: E0920 19:19:42.989884    3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rpfqm" podUID="ba7c8518-6c3e-4751-a9a5-29c77990a29c"
	Sep 20 19:19:46 no-preload-037711 kubelet[3315]: E0920 19:19:46.188059    3315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859986186491410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:19:46 no-preload-037711 kubelet[3315]: E0920 19:19:46.188108    3315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859986186491410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:19:54 no-preload-037711 kubelet[3315]: E0920 19:19:54.989300    3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rpfqm" podUID="ba7c8518-6c3e-4751-a9a5-29c77990a29c"
	Sep 20 19:19:56 no-preload-037711 kubelet[3315]: E0920 19:19:56.026459    3315 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 19:19:56 no-preload-037711 kubelet[3315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 19:19:56 no-preload-037711 kubelet[3315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 19:19:56 no-preload-037711 kubelet[3315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 19:19:56 no-preload-037711 kubelet[3315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 19:19:56 no-preload-037711 kubelet[3315]: E0920 19:19:56.189433    3315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859996189120408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:19:56 no-preload-037711 kubelet[3315]: E0920 19:19:56.189462    3315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859996189120408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:20:06 no-preload-037711 kubelet[3315]: E0920 19:20:06.191955    3315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860006191645186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:20:06 no-preload-037711 kubelet[3315]: E0920 19:20:06.192275    3315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860006191645186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:20:07 no-preload-037711 kubelet[3315]: E0920 19:20:07.989246    3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rpfqm" podUID="ba7c8518-6c3e-4751-a9a5-29c77990a29c"
	
	
	==> storage-provisioner [8c3b2c73c79f03835b2d7f9bbcac7a9e7daab8399d4394c4f5e57edfe00b04ca] <==
	I0920 19:11:02.303189       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 19:11:02.316278       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 19:11:02.320120       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 19:11:02.333563       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 19:11:02.334100       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24acebcb-4eea-4da5-80db-8fd1c1b18ecf", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-037711_f26a285c-8115-4e61-9cd0-a7b287203681 became leader
	I0920 19:11:02.336962       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-037711_f26a285c-8115-4e61-9cd0-a7b287203681!
	I0920 19:11:02.437703       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-037711_f26a285c-8115-4e61-9cd0-a7b287203681!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-037711 -n no-preload-037711
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-037711 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-rpfqm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-037711 describe pod metrics-server-6867b74b74-rpfqm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-037711 describe pod metrics-server-6867b74b74-rpfqm: exit status 1 (69.96324ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-rpfqm" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-037711 describe pod metrics-server-6867b74b74-rpfqm: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.42s)
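For reference, the post-mortem above can be replayed by hand against the same profile. A minimal shell sketch, assuming the no-preload-037711 kubeconfig context still exists; the pod name is the one the helper reported and may already have been replaced, in which case the describe call returns NotFound just as in the captured stderr:

    # List pods not in the Running phase across all namespaces,
    # mirroring the helper's field selector.
    kubectl --context no-preload-037711 get po -A \
      --field-selector=status.phase!=Running \
      -o=jsonpath='{.items[*].metadata.name}'

    # Describe the flagged metrics-server pod (it lives in kube-system
    # per the node description above); NotFound here matches the log.
    kubectl --context no-preload-037711 -n kube-system \
      describe pod metrics-server-6867b74b74-rpfqm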

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
E0920 19:13:33.962970  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
(warning above repeated 39 times in total)
E0920 19:14:15.617322  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
(warning above repeated 11 times in total)
E0920 19:14:27.162468  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
(warning above repeated 21 times in total)
E0920 19:14:48.376964  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
(warning above repeated 9 times in total)
E0920 19:14:57.029396  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
(warning above repeated 17 times in total)
E0920 19:15:13.963267  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
(warning above repeated 25 times in total)
E0920 19:15:38.680118  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
(warning above repeated 11 times in total)
E0920 19:15:50.144312  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
E0920 19:15:51.355008  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
E0920 19:16:11.441977  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
E0920 19:16:37.026964  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
E0920 19:17:13.211168  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
E0920 19:17:14.419826  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
E0920 19:17:29.486952  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
E0920 19:18:04.095715  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
E0920 19:18:30.941835  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
E0920 19:18:33.963016  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
E0920 19:19:15.617847  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
[the WARNING line above repeated 32 times]
E0920 19:19:48.376458  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
[the WARNING line above repeated 26 times]
E0920 19:20:13.963003  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
[the WARNING line above repeated 36 times]
E0920 19:20:50.144282  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
E0920 19:20:51.354763  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
[the WARNING line above repeated 43 times]
E0920 19:21:34.015868  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
[the WARNING line above repeated 13 times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
E0920 19:22:29.486795  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
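Each warning above is one failed GET against the apiserver's pod-list endpoint, filtered by the dashboard label selector. A rough manual equivalent of that poll (illustrative only, and assuming the kubectl context carries the same name as the minikube profile) would be:

    kubectl --context old-k8s-version-425599 get pods \
      -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard

With the apiserver on 192.168.39.53:8443 refusing connections, this command fails the same way until the 9m0s wait hits its context deadline.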
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-425599 -n old-k8s-version-425599
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-425599 -n old-k8s-version-425599: exit status 2 (262.1951ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-425599" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
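The post-mortem below reports the VM host as Running while the apiserver is Stopped, so the failure sits inside the node rather than at the VM level. One hedged way to inspect the control-plane container in that state (a sketch only; it assumes crictl is available on the CRI-O node, which minikube normally provides) would be:

    out/minikube-linux-amd64 -p old-k8s-version-425599 ssh -- sudo crictl ps -a --name kube-apiserver

This shows the kube-apiserver container's state and attempt count, complementing the "logs -n 25" capture that follows.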
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-425599 -n old-k8s-version-425599
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-425599 -n old-k8s-version-425599: exit status 2 (236.906091ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-425599 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-425599 logs -n 25: (1.695518762s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-793540 sudo cat                             | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo                                 | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo                                 | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo                                 | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo find                            | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo crio                            | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-793540                                      | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-896665 | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | disable-driver-mounts-896665                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:57 UTC |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-037711             | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-037711                                   | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-339897            | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-339897                                  | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-612312  | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-037711                  | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-037711                                   | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC | 20 Sep 24 19:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-339897                 | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-425599        | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-339897                                  | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC | 20 Sep 24 19:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-612312       | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC | 20 Sep 24 19:09 UTC |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-425599                              | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-425599             | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-425599                              | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:01:28
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:01:28.948776  303486 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:01:28.948894  303486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:01:28.948900  303486 out.go:358] Setting ErrFile to fd 2...
	I0920 19:01:28.948906  303486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:01:28.949090  303486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 19:01:28.949637  303486 out.go:352] Setting JSON to false
	I0920 19:01:28.950705  303486 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9832,"bootTime":1726849057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 19:01:28.950809  303486 start.go:139] virtualization: kvm guest
	I0920 19:01:28.953226  303486 out.go:177] * [old-k8s-version-425599] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 19:01:28.955013  303486 notify.go:220] Checking for updates...
	I0920 19:01:28.955065  303486 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 19:01:28.956932  303486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:01:28.959076  303486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:01:28.961116  303486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 19:01:28.963396  303486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 19:01:28.965428  303486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:01:28.967688  303486 config.go:182] Loaded profile config "old-k8s-version-425599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 19:01:28.968112  303486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:01:28.968175  303486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:01:28.984002  303486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40325
	I0920 19:01:28.984552  303486 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:01:28.985260  303486 main.go:141] libmachine: Using API Version  1
	I0920 19:01:28.985291  303486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:01:28.985715  303486 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:01:28.985972  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:01:28.988070  303486 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 19:01:28.989565  303486 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:01:28.990007  303486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:01:28.990079  303486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:01:29.006020  303486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34059
	I0920 19:01:29.006492  303486 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:01:29.007046  303486 main.go:141] libmachine: Using API Version  1
	I0920 19:01:29.007078  303486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:01:29.007441  303486 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:01:29.007706  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:01:29.049785  303486 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 19:01:29.051185  303486 start.go:297] selected driver: kvm2
	I0920 19:01:29.051206  303486 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:01:29.051323  303486 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:01:29.052030  303486 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:01:29.052131  303486 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 19:01:29.068826  303486 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 19:01:29.069232  303486 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:01:29.069262  303486 cni.go:84] Creating CNI manager for ""
	I0920 19:01:29.069297  303486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:01:29.069333  303486 start.go:340] cluster config:
	{Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:01:29.069439  303486 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:01:29.071617  303486 out.go:177] * Starting "old-k8s-version-425599" primary control-plane node in "old-k8s-version-425599" cluster
	I0920 19:01:27.086248  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:29.073133  303486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 19:01:29.073174  303486 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 19:01:29.073182  303486 cache.go:56] Caching tarball of preloaded images
	I0920 19:01:29.073269  303486 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 19:01:29.073285  303486 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 19:01:29.073388  303486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/config.json ...
	I0920 19:01:29.073573  303486 start.go:360] acquireMachinesLock for old-k8s-version-425599: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:01:33.166258  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:36.238261  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:42.318195  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:45.390223  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:51.470272  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:54.542277  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:00.622232  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:03.694275  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:09.774241  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:12.846248  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:18.926213  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:21.998195  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:28.078192  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:31.150239  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:37.230160  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:40.302224  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:46.382225  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:49.454205  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:55.534186  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:58.606232  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:04.686254  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:07.758234  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:13.838239  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:16.910321  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:22.990234  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:26.062339  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:32.142210  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:35.214256  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:41.294234  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:44.366288  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:50.446215  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:53.518266  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:59.598190  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:02.670240  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:08.750179  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:11.822239  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:17.902176  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:20.974235  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:23.977804  302869 start.go:364] duration metric: took 4m19.519175605s to acquireMachinesLock for "embed-certs-339897"
	I0920 19:04:23.977868  302869 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:04:23.977876  302869 fix.go:54] fixHost starting: 
	I0920 19:04:23.978233  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:04:23.978277  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:04:23.993804  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39423
	I0920 19:04:23.994326  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:04:23.994906  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:04:23.994925  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:04:23.995219  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:04:23.995413  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:23.995575  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:04:23.997417  302869 fix.go:112] recreateIfNeeded on embed-certs-339897: state=Stopped err=<nil>
	I0920 19:04:23.997439  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	W0920 19:04:23.997636  302869 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:04:24.001021  302869 out.go:177] * Restarting existing kvm2 VM for "embed-certs-339897" ...
	I0920 19:04:24.002636  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Start
	I0920 19:04:24.002842  302869 main.go:141] libmachine: (embed-certs-339897) Ensuring networks are active...
	I0920 19:04:24.003916  302869 main.go:141] libmachine: (embed-certs-339897) Ensuring network default is active
	I0920 19:04:24.004282  302869 main.go:141] libmachine: (embed-certs-339897) Ensuring network mk-embed-certs-339897 is active
	I0920 19:04:24.004647  302869 main.go:141] libmachine: (embed-certs-339897) Getting domain xml...
	I0920 19:04:24.005446  302869 main.go:141] libmachine: (embed-certs-339897) Creating domain...
	I0920 19:04:23.975096  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:04:23.975155  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:04:23.975457  302538 buildroot.go:166] provisioning hostname "no-preload-037711"
	I0920 19:04:23.975485  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:04:23.975712  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:04:23.977607  302538 machine.go:96] duration metric: took 4m37.412034117s to provisionDockerMachine
	I0920 19:04:23.977703  302538 fix.go:56] duration metric: took 4m37.437032108s for fixHost
	I0920 19:04:23.977718  302538 start.go:83] releasing machines lock for "no-preload-037711", held for 4m37.437081737s
	W0920 19:04:23.977745  302538 start.go:714] error starting host: provision: host is not running
	W0920 19:04:23.977850  302538 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0920 19:04:23.977859  302538 start.go:729] Will try again in 5 seconds ...
	I0920 19:04:25.258221  302869 main.go:141] libmachine: (embed-certs-339897) Waiting to get IP...
	I0920 19:04:25.259119  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:25.259493  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:25.259584  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:25.259481  304091 retry.go:31] will retry after 212.462393ms: waiting for machine to come up
	I0920 19:04:25.474057  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:25.474524  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:25.474564  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:25.474441  304091 retry.go:31] will retry after 306.01691ms: waiting for machine to come up
	I0920 19:04:25.782264  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:25.782729  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:25.782753  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:25.782706  304091 retry.go:31] will retry after 416.637796ms: waiting for machine to come up
	I0920 19:04:26.201336  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:26.201704  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:26.201738  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:26.201645  304091 retry.go:31] will retry after 583.373452ms: waiting for machine to come up
	I0920 19:04:26.786448  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:26.786854  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:26.786876  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:26.786807  304091 retry.go:31] will retry after 760.706965ms: waiting for machine to come up
	I0920 19:04:27.548786  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:27.549126  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:27.549149  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:27.549088  304091 retry.go:31] will retry after 615.829194ms: waiting for machine to come up
	I0920 19:04:28.167061  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:28.167601  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:28.167647  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:28.167419  304091 retry.go:31] will retry after 786.700064ms: waiting for machine to come up
	I0920 19:04:28.955294  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:28.955658  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:28.955685  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:28.955611  304091 retry.go:31] will retry after 1.309567829s: waiting for machine to come up
	I0920 19:04:28.979506  302538 start.go:360] acquireMachinesLock for no-preload-037711: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:04:30.267104  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:30.267645  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:30.267676  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:30.267583  304091 retry.go:31] will retry after 1.153396834s: waiting for machine to come up
	I0920 19:04:31.423030  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:31.423604  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:31.423629  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:31.423542  304091 retry.go:31] will retry after 1.858288741s: waiting for machine to come up
	I0920 19:04:33.284886  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:33.285381  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:33.285419  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:33.285334  304091 retry.go:31] will retry after 2.343802005s: waiting for machine to come up
	I0920 19:04:35.630962  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:35.631408  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:35.631439  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:35.631359  304091 retry.go:31] will retry after 2.42254126s: waiting for machine to come up
	I0920 19:04:38.055128  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:38.055796  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:38.055854  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:38.055732  304091 retry.go:31] will retry after 3.877296828s: waiting for machine to come up
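The repeated "will retry after …: waiting for machine to come up" lines above come from minikube's retry helper polling libvirt for the domain's DHCP lease, scheduling each new attempt after a longer delay. Below is a minimal Go sketch of that poll-with-backoff pattern; the function and constant names are illustrative, not minikube's actual API.

// A minimal sketch of the poll-with-backoff loop behind the retry lines above.
// lookupIP stands in for the libvirt DHCP-lease lookup and is purely illustrative.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP(attempt int) (string, error) {
	if attempt < 4 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.72.72", nil
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Grow the delay and add jitter, mirroring the increasing
		// "will retry after ..." intervals in the log.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}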
	I0920 19:04:43.362725  303063 start.go:364] duration metric: took 4m20.211671699s to acquireMachinesLock for "default-k8s-diff-port-612312"
	I0920 19:04:43.362794  303063 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:04:43.362810  303063 fix.go:54] fixHost starting: 
	I0920 19:04:43.363257  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:04:43.363315  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:04:43.380877  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34751
	I0920 19:04:43.381399  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:04:43.381894  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:04:43.381933  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:04:43.382364  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:04:43.382596  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:04:43.382746  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:04:43.384351  303063 fix.go:112] recreateIfNeeded on default-k8s-diff-port-612312: state=Stopped err=<nil>
	I0920 19:04:43.384379  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	W0920 19:04:43.384540  303063 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:04:43.386969  303063 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-612312" ...
	I0920 19:04:41.936215  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.936789  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has current primary IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.936811  302869 main.go:141] libmachine: (embed-certs-339897) Found IP for machine: 192.168.72.72
	I0920 19:04:41.936823  302869 main.go:141] libmachine: (embed-certs-339897) Reserving static IP address...
	I0920 19:04:41.937386  302869 main.go:141] libmachine: (embed-certs-339897) Reserved static IP address: 192.168.72.72
	I0920 19:04:41.937412  302869 main.go:141] libmachine: (embed-certs-339897) Waiting for SSH to be available...
	I0920 19:04:41.937435  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "embed-certs-339897", mac: "52:54:00:dc:b1:41", ip: "192.168.72.72"} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:41.937466  302869 main.go:141] libmachine: (embed-certs-339897) DBG | skip adding static IP to network mk-embed-certs-339897 - found existing host DHCP lease matching {name: "embed-certs-339897", mac: "52:54:00:dc:b1:41", ip: "192.168.72.72"}
	I0920 19:04:41.937481  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Getting to WaitForSSH function...
	I0920 19:04:41.939673  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.940065  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:41.940089  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.940196  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Using SSH client type: external
	I0920 19:04:41.940223  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa (-rw-------)
	I0920 19:04:41.940261  302869 main.go:141] libmachine: (embed-certs-339897) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:04:41.940274  302869 main.go:141] libmachine: (embed-certs-339897) DBG | About to run SSH command:
	I0920 19:04:41.940285  302869 main.go:141] libmachine: (embed-certs-339897) DBG | exit 0
	I0920 19:04:42.065967  302869 main.go:141] libmachine: (embed-certs-339897) DBG | SSH cmd err, output: <nil>: 
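The WaitForSSH step above probes the guest by running `exit 0` through the external ssh client with host-key checking disabled, retrying until the command succeeds. A minimal Go sketch of that probe follows; the IP, key path, and attempt limit are illustrative values, not the real paths from this run.

// A minimal sketch of the WaitForSSH probe: shell out to the system ssh client
// with the same hardening flags seen in the log and run `exit 0` until it works.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(ip, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no", "-o", "IdentitiesOnly=yes",
		"-i", keyPath, "-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		if sshReady("192.168.72.72", "/path/to/machines/embed-certs-339897/id_rsa") {
			fmt.Println("SSH is available")
			return
		}
		fmt.Printf("attempt %d: SSH not ready yet, retrying\n", attempt)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}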
	I0920 19:04:42.066357  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetConfigRaw
	I0920 19:04:42.067004  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:42.069586  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.069937  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.069968  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.070208  302869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/config.json ...
	I0920 19:04:42.070452  302869 machine.go:93] provisionDockerMachine start ...
	I0920 19:04:42.070478  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:42.070687  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.072878  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.073340  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.073375  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.073501  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.073701  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.073899  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.074080  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.074254  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.074504  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.074523  302869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:04:42.182250  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:04:42.182287  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetMachineName
	I0920 19:04:42.182543  302869 buildroot.go:166] provisioning hostname "embed-certs-339897"
	I0920 19:04:42.182570  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetMachineName
	I0920 19:04:42.182818  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.185497  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.185850  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.185886  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.186069  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.186274  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.186421  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.186568  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.186770  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.186986  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.187006  302869 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-339897 && echo "embed-certs-339897" | sudo tee /etc/hostname
	I0920 19:04:42.307656  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-339897
	
	I0920 19:04:42.307700  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.310572  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.310943  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.310970  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.311184  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.311382  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.311534  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.311663  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.311810  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.311984  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.312003  302869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-339897' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-339897/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-339897' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:04:42.426403  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:04:42.426440  302869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:04:42.426493  302869 buildroot.go:174] setting up certificates
	I0920 19:04:42.426502  302869 provision.go:84] configureAuth start
	I0920 19:04:42.426513  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetMachineName
	I0920 19:04:42.426822  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:42.429708  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.430134  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.430170  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.430328  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.432799  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.433222  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.433251  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.433383  302869 provision.go:143] copyHostCerts
	I0920 19:04:42.433466  302869 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:04:42.433487  302869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:04:42.433549  302869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:04:42.433644  302869 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:04:42.433652  302869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:04:42.433678  302869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:04:42.433735  302869 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:04:42.433742  302869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:04:42.433762  302869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:04:42.433811  302869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.embed-certs-339897 san=[127.0.0.1 192.168.72.72 embed-certs-339897 localhost minikube]
	I0920 19:04:42.745528  302869 provision.go:177] copyRemoteCerts
	I0920 19:04:42.745599  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:04:42.745633  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.748247  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.748587  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.748619  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.748811  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.749014  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.749201  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.749334  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:42.831927  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:04:42.855674  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:04:42.879114  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 19:04:42.902982  302869 provision.go:87] duration metric: took 476.462339ms to configureAuth
	I0920 19:04:42.903019  302869 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:04:42.903236  302869 config.go:182] Loaded profile config "embed-certs-339897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:04:42.903321  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.906208  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.906580  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.906613  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.906810  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.907006  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.907136  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.907263  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.907427  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.907601  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.907616  302869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:04:43.127800  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:04:43.127847  302869 machine.go:96] duration metric: took 1.057372659s to provisionDockerMachine
	I0920 19:04:43.127864  302869 start.go:293] postStartSetup for "embed-certs-339897" (driver="kvm2")
	I0920 19:04:43.127890  302869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:04:43.127917  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.128263  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:04:43.128298  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.131648  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.132138  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.132173  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.132340  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.132560  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.132739  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.132896  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:43.216646  302869 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:04:43.220513  302869 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:04:43.220548  302869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:04:43.220629  302869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:04:43.220734  302869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:04:43.220862  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:04:43.230506  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:04:43.252894  302869 start.go:296] duration metric: took 125.003067ms for postStartSetup
	I0920 19:04:43.252943  302869 fix.go:56] duration metric: took 19.275066559s for fixHost
	I0920 19:04:43.252971  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.255999  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.256378  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.256406  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.256634  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.256858  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.257047  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.257214  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.257382  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:43.257546  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:43.257556  302869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:04:43.362516  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859083.339291891
	
	I0920 19:04:43.362545  302869 fix.go:216] guest clock: 1726859083.339291891
	I0920 19:04:43.362553  302869 fix.go:229] Guest: 2024-09-20 19:04:43.339291891 +0000 UTC Remote: 2024-09-20 19:04:43.25294824 +0000 UTC m=+278.942139838 (delta=86.343651ms)
	I0920 19:04:43.362585  302869 fix.go:200] guest clock delta is within tolerance: 86.343651ms
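The guest-clock check above runs `date +%s.%N` on the VM and compares the result with the host clock; the 86.343651ms delta is accepted as within tolerance. A minimal Go sketch of parsing that output and computing the delta follows; the tolerance value used here is illustrative.

// A minimal sketch of the guest-clock check: parse `date +%s.%N` output and
// compare it with the local clock.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts output like "1726859083.339291891" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726859083.339291891") // value taken from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	// Illustrative tolerance; minikube's actual threshold may differ.
	if math.Abs(delta.Seconds()) < 1 {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock skew detected: %v\n", delta)
	}
}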
	I0920 19:04:43.362591  302869 start.go:83] releasing machines lock for "embed-certs-339897", held for 19.38474105s
	I0920 19:04:43.362620  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.362970  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:43.365988  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.366359  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.366380  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.366610  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.367130  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.367326  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.367423  302869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:04:43.367469  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.367602  302869 ssh_runner.go:195] Run: cat /version.json
	I0920 19:04:43.367628  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.370233  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.370594  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.370624  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.370649  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.370804  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.370998  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.371169  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.371191  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.371249  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.371406  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.371470  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:43.371566  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.371721  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.371885  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:43.490023  302869 ssh_runner.go:195] Run: systemctl --version
	I0920 19:04:43.496615  302869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:04:43.643493  302869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:04:43.649492  302869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:04:43.649560  302869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:04:43.665423  302869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:04:43.665460  302869 start.go:495] detecting cgroup driver to use...
	I0920 19:04:43.665530  302869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:04:43.681288  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:04:43.695161  302869 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:04:43.695218  302869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:04:43.708772  302869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:04:43.722803  302869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:04:43.834054  302869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:04:43.966014  302869 docker.go:233] disabling docker service ...
	I0920 19:04:43.966102  302869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:04:43.982324  302869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:04:43.995351  302869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:04:44.135635  302869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:04:44.262661  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:04:44.277377  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:04:44.299889  302869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:04:44.299965  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.312434  302869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:04:44.312534  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.323052  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.333504  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.343704  302869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:04:44.354386  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.364308  302869 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.383581  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.395013  302869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:04:44.405227  302869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:04:44.405279  302869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:04:44.418685  302869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
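The sequence above shows the netfilter check falling back: `sysctl net.bridge.bridge-nf-call-iptables` exits with status 255 because the bridge keys do not exist until the br_netfilter module is loaded, so minikube loads the module and then enables IPv4 forwarding before restarting CRI-O. A minimal Go sketch of that check-and-fallback follows; in the real flow these commands run over SSH on the guest, and the sketch assumes passwordless sudo.

// A minimal sketch of the netfilter check with modprobe fallback seen above.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// The bridge sysctl keys only appear once br_netfilter is loaded,
		// which is why the log shows a modprobe after the sysctl failure.
		_ = run("sudo", "modprobe", "br_netfilter")
	}
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}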
	I0920 19:04:44.431323  302869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:04:44.558582  302869 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:04:44.644003  302869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:04:44.644091  302869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:04:44.649434  302869 start.go:563] Will wait 60s for crictl version
	I0920 19:04:44.649498  302869 ssh_runner.go:195] Run: which crictl
	I0920 19:04:44.653334  302869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:04:44.695896  302869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:04:44.696004  302869 ssh_runner.go:195] Run: crio --version
	I0920 19:04:44.726148  302869 ssh_runner.go:195] Run: crio --version
	I0920 19:04:44.757340  302869 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 19:04:43.388378  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Start
	I0920 19:04:43.388603  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Ensuring networks are active...
	I0920 19:04:43.389387  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Ensuring network default is active
	I0920 19:04:43.389863  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Ensuring network mk-default-k8s-diff-port-612312 is active
	I0920 19:04:43.390364  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Getting domain xml...
	I0920 19:04:43.391121  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Creating domain...
	I0920 19:04:44.718004  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting to get IP...
	I0920 19:04:44.718885  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.719317  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.719413  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:44.719288  304227 retry.go:31] will retry after 197.63251ms: waiting for machine to come up
	I0920 19:04:44.919026  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.919516  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.919547  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:44.919475  304227 retry.go:31] will retry after 305.409091ms: waiting for machine to come up
	I0920 19:04:45.227550  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.228191  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.228224  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:45.228147  304227 retry.go:31] will retry after 311.72219ms: waiting for machine to come up
	I0920 19:04:45.541945  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.542374  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.542403  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:45.542344  304227 retry.go:31] will retry after 547.369471ms: waiting for machine to come up
	I0920 19:04:46.091199  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.091731  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.091765  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:46.091693  304227 retry.go:31] will retry after 519.190971ms: waiting for machine to come up
	I0920 19:04:46.612175  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.612641  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.612672  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:46.612591  304227 retry.go:31] will retry after 715.908704ms: waiting for machine to come up
	I0920 19:04:47.330911  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:47.331350  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:47.331379  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:47.331294  304227 retry.go:31] will retry after 898.358136ms: waiting for machine to come up
	I0920 19:04:44.759090  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:44.762331  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:44.762696  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:44.762728  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:44.762954  302869 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 19:04:44.767209  302869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:04:44.781327  302869 kubeadm.go:883] updating cluster {Name:embed-certs-339897 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-339897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:04:44.781465  302869 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:04:44.781512  302869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:04:44.817356  302869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 19:04:44.817422  302869 ssh_runner.go:195] Run: which lz4
	I0920 19:04:44.821534  302869 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:04:44.826169  302869 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:04:44.826205  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 19:04:46.160290  302869 crio.go:462] duration metric: took 1.338795677s to copy over tarball
	I0920 19:04:46.160379  302869 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 19:04:48.265535  302869 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.105118482s)
	I0920 19:04:48.265580  302869 crio.go:469] duration metric: took 2.105250135s to extract the tarball
	I0920 19:04:48.265588  302869 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 19:04:48.302529  302869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:04:48.346391  302869 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:04:48.346419  302869 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:04:48.346427  302869 kubeadm.go:934] updating node { 192.168.72.72 8443 v1.31.1 crio true true} ...
	I0920 19:04:48.346566  302869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-339897 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-339897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:04:48.346668  302869 ssh_runner.go:195] Run: crio config
	I0920 19:04:48.396798  302869 cni.go:84] Creating CNI manager for ""
	I0920 19:04:48.396824  302869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:04:48.396834  302869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:04:48.396866  302869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.72 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-339897 NodeName:embed-certs-339897 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:04:48.397043  302869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-339897"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:04:48.397121  302869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:04:48.407031  302869 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:04:48.407118  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:04:48.416554  302869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0920 19:04:48.432540  302869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:04:48.448042  302869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0920 19:04:48.465193  302869 ssh_runner.go:195] Run: grep 192.168.72.72	control-plane.minikube.internal$ /etc/hosts
	I0920 19:04:48.469083  302869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
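The one-liner above rewrites /etc/hosts idempotently: strip any stale control-plane.minikube.internal entry, then append the current control-plane IP. A rough Go equivalent (hostname and IP taken from the log, error handling simplified, root required to write the file):

// hosts_entry.go - sketch of the idempotent /etc/hosts rewrite shown above.
package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const ip = "192.168.72.72"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) { // drop the old mapping, if any
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}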
	I0920 19:04:48.481123  302869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:04:48.609883  302869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:04:48.627512  302869 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897 for IP: 192.168.72.72
	I0920 19:04:48.627545  302869 certs.go:194] generating shared ca certs ...
	I0920 19:04:48.627571  302869 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:04:48.627784  302869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:04:48.627851  302869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:04:48.627866  302869 certs.go:256] generating profile certs ...
	I0920 19:04:48.628032  302869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/client.key
	I0920 19:04:48.628143  302869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/apiserver.key.308547ed
	I0920 19:04:48.628206  302869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/proxy-client.key
	I0920 19:04:48.628375  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:04:48.628421  302869 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:04:48.628435  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:04:48.628470  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:04:48.628509  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:04:48.628542  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:04:48.628616  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:04:48.629569  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:04:48.656203  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:04:48.708322  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:04:48.737686  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:04:48.772198  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 19:04:48.812086  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 19:04:48.836038  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:04:48.859972  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 19:04:48.883881  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:04:48.908399  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:04:48.930787  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:04:48.954052  302869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:04:48.970257  302869 ssh_runner.go:195] Run: openssl version
	I0920 19:04:48.976072  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:04:48.986449  302869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:04:48.990765  302869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:04:48.990833  302869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:04:48.996437  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:04:49.007111  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:04:49.017548  302869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:04:49.022044  302869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:04:49.022108  302869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:04:49.027752  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:04:49.038538  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:04:49.049445  302869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:04:49.054018  302869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:04:49.054100  302869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:04:49.059842  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
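The sequence above installs each CA into the VM's trust store the way update-ca-certificates would: compute the certificate's subject hash with openssl, then symlink the cert as /etc/ssl/certs/<hash>.0. A short Go sketch of that step, shelling out to openssl (not minikube's implementation):

// trust_cert.go - install a CA into the system trust store via the
// conventional <subject-hash>.0 symlink under /etc/ssl/certs.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func trustCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mimic `ln -fs`: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/2448492.pem"); err != nil {
		fmt.Println("trust failed:", err)
	}
}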
	I0920 19:04:49.070748  302869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:04:49.075195  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:04:49.081100  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:04:49.086844  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:04:49.092790  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:04:49.098664  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:04:49.104562  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
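`openssl x509 -checkend 86400` asks whether a certificate will still be valid 24 hours from now; a cert that fails the check is a candidate for regeneration. The same test in plain Go with crypto/x509 (the path is one of the certs checked above, purely illustrative):

// checkend.go - report whether a PEM certificate is valid for another 24h.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("valid for the next 24h:", ok)
}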
	I0920 19:04:49.110818  302869 kubeadm.go:392] StartCluster: {Name:embed-certs-339897 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-339897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:04:49.110952  302869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:04:49.111003  302869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:04:49.157700  302869 cri.go:89] found id: ""
	I0920 19:04:49.157774  302869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:04:49.168314  302869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:04:49.168339  302869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:04:49.168385  302869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:04:49.178632  302869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:04:49.179681  302869 kubeconfig.go:125] found "embed-certs-339897" server: "https://192.168.72.72:8443"
	I0920 19:04:49.181624  302869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:04:49.192084  302869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.72
	I0920 19:04:49.192159  302869 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:04:49.192188  302869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:04:49.192265  302869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:04:49.229141  302869 cri.go:89] found id: ""
	I0920 19:04:49.229232  302869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:04:49.247628  302869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:04:49.258190  302869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:04:49.258211  302869 kubeadm.go:157] found existing configuration files:
	
	I0920 19:04:49.258270  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:04:49.267769  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:04:49.267837  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:04:49.277473  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:04:49.286639  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:04:49.286712  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:04:49.296295  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:04:49.305705  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:04:49.305787  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:04:49.315191  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:04:49.324206  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:04:49.324288  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:04:49.334065  302869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:04:49.344823  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:48.231405  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:48.231846  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:48.231872  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:48.231795  304227 retry.go:31] will retry after 1.105264539s: waiting for machine to come up
	I0920 19:04:49.338940  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:49.339413  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:49.339437  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:49.339366  304227 retry.go:31] will retry after 1.638536651s: waiting for machine to come up
	I0920 19:04:50.980320  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:50.980774  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:50.980805  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:50.980714  304227 retry.go:31] will retry after 2.064766522s: waiting for machine to come up
	I0920 19:04:49.450454  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.412643  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.629144  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.694547  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.756897  302869 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:04:50.757008  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:51.258120  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:51.758025  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:52.258040  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:52.757302  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:52.774867  302869 api_server.go:72] duration metric: took 2.017964832s to wait for apiserver process to appear ...
	I0920 19:04:52.774906  302869 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:04:52.774954  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:55.383214  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:04:55.383255  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:04:55.383272  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:55.406625  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:04:55.406660  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:04:55.775825  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:55.785126  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:04:55.785157  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:04:56.275864  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:56.284002  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:04:56.284032  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:04:56.775547  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:56.779999  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 200:
	ok
	I0920 19:04:56.786034  302869 api_server.go:141] control plane version: v1.31.1
	I0920 19:04:56.786066  302869 api_server.go:131] duration metric: took 4.011153019s to wait for apiserver health ...
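The healthz progression above is typical of a control-plane restart: 403 while only anonymous auth is available, 500 while post-start hooks such as rbac/bootstrap-roles are still failing, then 200 once bootstrap completes. A minimal Go polling loop that treats anything other than a 200 "ok" body as not-ready (TLS verification is skipped here only to keep the sketch short):

// healthz_wait.go - poll the apiserver /healthz endpoint until it reports ok.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.72:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for /healthz")
}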
	I0920 19:04:56.786076  302869 cni.go:84] Creating CNI manager for ""
	I0920 19:04:56.786082  302869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:04:56.788195  302869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:04:53.047487  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:53.048005  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:53.048027  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:53.047958  304227 retry.go:31] will retry after 2.829648578s: waiting for machine to come up
	I0920 19:04:55.879069  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:55.879538  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:55.879562  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:55.879488  304227 retry.go:31] will retry after 3.029828813s: waiting for machine to come up
	I0920 19:04:56.789703  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:04:56.799605  302869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:04:56.816974  302869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:04:56.828470  302869 system_pods.go:59] 8 kube-system pods found
	I0920 19:04:56.828582  302869 system_pods.go:61] "coredns-7c65d6cfc9-xnfsk" [5e34a8b9-d748-484a-92ab-0d288ab5f35e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:04:56.828610  302869 system_pods.go:61] "etcd-embed-certs-339897" [1d0e8303-0ab9-418c-ba2d-f0ba33abad36] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 19:04:56.828637  302869 system_pods.go:61] "kube-apiserver-embed-certs-339897" [35569778-54b1-456d-8822-5a53a5e336fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 19:04:56.828655  302869 system_pods.go:61] "kube-controller-manager-embed-certs-339897" [6b9db655-59a1-4975-b3c7-fcc29a912850] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:04:56.828677  302869 system_pods.go:61] "kube-proxy-xs4nd" [a32f4c96-ae6e-4402-89c5-0226a4412d17] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 19:04:56.828694  302869 system_pods.go:61] "kube-scheduler-embed-certs-339897" [81dd07df-2ba9-4f8e-bb16-263bd6496a0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 19:04:56.828716  302869 system_pods.go:61] "metrics-server-6867b74b74-qqhcw" [b720a331-05ef-4528-bd25-0c1e7ef66b16] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:04:56.828729  302869 system_pods.go:61] "storage-provisioner" [08674813-f61d-49e9-a714-5f38b95f058e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 19:04:56.828738  302869 system_pods.go:74] duration metric: took 11.732519ms to wait for pod list to return data ...
	I0920 19:04:56.828748  302869 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:04:56.835747  302869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:04:56.835786  302869 node_conditions.go:123] node cpu capacity is 2
	I0920 19:04:56.835799  302869 node_conditions.go:105] duration metric: took 7.044914ms to run NodePressure ...
	I0920 19:04:56.835822  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:57.221422  302869 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 19:04:57.225575  302869 kubeadm.go:739] kubelet initialised
	I0920 19:04:57.225601  302869 kubeadm.go:740] duration metric: took 4.150722ms waiting for restarted kubelet to initialise ...
	I0920 19:04:57.225610  302869 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:04:57.230469  302869 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace to be "Ready" ...
	I0920 19:04:59.237961  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:58.911412  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:58.911990  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:58.912020  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:58.911956  304227 retry.go:31] will retry after 3.428044067s: waiting for machine to come up
	I0920 19:05:02.343216  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.343633  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Found IP for machine: 192.168.50.230
	I0920 19:05:02.343668  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has current primary IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.343679  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Reserving static IP address...
	I0920 19:05:02.344038  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Reserved static IP address: 192.168.50.230
	I0920 19:05:02.344084  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-612312", mac: "52:54:00:fa:2b:63", ip: "192.168.50.230"} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.344097  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for SSH to be available...
	I0920 19:05:02.344123  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | skip adding static IP to network mk-default-k8s-diff-port-612312 - found existing host DHCP lease matching {name: "default-k8s-diff-port-612312", mac: "52:54:00:fa:2b:63", ip: "192.168.50.230"}
	I0920 19:05:02.344136  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Getting to WaitForSSH function...
	I0920 19:05:02.346591  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.346932  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.346957  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.347110  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Using SSH client type: external
	I0920 19:05:02.347157  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa (-rw-------)
	I0920 19:05:02.347194  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:05:02.347214  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | About to run SSH command:
	I0920 19:05:02.347227  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | exit 0
	I0920 19:05:02.474040  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | SSH cmd err, output: <nil>: 
	I0920 19:05:02.474475  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetConfigRaw
	I0920 19:05:02.475160  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:02.477963  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.478338  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.478361  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.478680  303063 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/config.json ...
	I0920 19:05:02.478923  303063 machine.go:93] provisionDockerMachine start ...
	I0920 19:05:02.478949  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:02.479166  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.481380  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.481759  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.481797  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.481961  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.482149  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.482307  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.482458  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.482619  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:02.482883  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:02.482900  303063 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:05:02.586360  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:05:02.586395  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetMachineName
	I0920 19:05:02.586694  303063 buildroot.go:166] provisioning hostname "default-k8s-diff-port-612312"
	I0920 19:05:02.586720  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetMachineName
	I0920 19:05:02.586951  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.589692  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.590053  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.590080  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.590230  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.590420  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.590563  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.590722  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.590936  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:02.591112  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:02.591126  303063 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-612312 && echo "default-k8s-diff-port-612312" | sudo tee /etc/hostname
	I0920 19:05:02.707768  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-612312
	
	I0920 19:05:02.707799  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.710647  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.711035  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.711064  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.711234  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.711448  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.711622  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.711791  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.711938  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:02.712098  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:02.712116  303063 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-612312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-612312/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-612312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:05:02.828234  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
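provisionDockerMachine drives these steps over SSH with the machine's private key. A stripped-down sketch using golang.org/x/crypto/ssh that runs the hostname command shown above (host key checking is disabled only for brevity; the key path and address come from the log):

// provision_ssh.go - run a provisioning command on the VM over SSH.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
	}
	client, err := ssh.Dial("tcp", "192.168.50.230:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	cmd := `sudo hostname default-k8s-diff-port-612312 && echo "default-k8s-diff-port-612312" | sudo tee /etc/hostname`
	out, err := session.CombinedOutput(cmd)
	fmt.Printf("%s err=%v\n", out, err)
}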
	I0920 19:05:02.828274  303063 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:05:02.828314  303063 buildroot.go:174] setting up certificates
	I0920 19:05:02.828327  303063 provision.go:84] configureAuth start
	I0920 19:05:02.828340  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetMachineName
	I0920 19:05:02.828700  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:02.831997  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.832469  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.832503  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.832704  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.835280  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.835577  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.835608  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.835699  303063 provision.go:143] copyHostCerts
	I0920 19:05:02.835766  303063 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:05:02.835787  303063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:05:02.835848  303063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:05:02.835947  303063 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:05:02.835955  303063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:05:02.835975  303063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:05:02.836026  303063 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:05:02.836033  303063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:05:02.836055  303063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:05:02.836103  303063 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-612312 san=[127.0.0.1 192.168.50.230 default-k8s-diff-port-612312 localhost minikube]
	I0920 19:05:02.983437  303063 provision.go:177] copyRemoteCerts
	I0920 19:05:02.983510  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:05:02.983541  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.986435  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.986791  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.986835  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.987110  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.987289  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.987438  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.987579  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.674961  303486 start.go:364] duration metric: took 3m34.601349843s to acquireMachinesLock for "old-k8s-version-425599"
	I0920 19:05:03.675039  303486 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:05:03.675048  303486 fix.go:54] fixHost starting: 
	I0920 19:05:03.675480  303486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:03.675541  303486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:03.694201  303486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34167
	I0920 19:05:03.694642  303486 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:03.695198  303486 main.go:141] libmachine: Using API Version  1
	I0920 19:05:03.695221  303486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:03.695609  303486 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:03.695793  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:03.695935  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetState
	I0920 19:05:03.697838  303486 fix.go:112] recreateIfNeeded on old-k8s-version-425599: state=Stopped err=<nil>
	I0920 19:05:03.697885  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	W0920 19:05:03.698080  303486 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:05:03.700333  303486 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-425599" ...
	I0920 19:05:03.701947  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .Start
	I0920 19:05:03.702184  303486 main.go:141] libmachine: (old-k8s-version-425599) Ensuring networks are active...
	I0920 19:05:03.703106  303486 main.go:141] libmachine: (old-k8s-version-425599) Ensuring network default is active
	I0920 19:05:03.703645  303486 main.go:141] libmachine: (old-k8s-version-425599) Ensuring network mk-old-k8s-version-425599 is active
	I0920 19:05:03.704152  303486 main.go:141] libmachine: (old-k8s-version-425599) Getting domain xml...
	I0920 19:05:03.704942  303486 main.go:141] libmachine: (old-k8s-version-425599) Creating domain...
	I0920 19:05:01.738488  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:03.238934  302869 pod_ready.go:93] pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:03.238968  302869 pod_ready.go:82] duration metric: took 6.008471722s for pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:03.238978  302869 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:03.746041  302869 pod_ready.go:93] pod "etcd-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:03.746069  302869 pod_ready.go:82] duration metric: took 507.084418ms for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:03.746078  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:03.072306  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0920 19:05:03.096078  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:05:03.122027  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:05:03.150314  303063 provision.go:87] duration metric: took 321.970593ms to configureAuth
	I0920 19:05:03.150345  303063 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:05:03.150557  303063 config.go:182] Loaded profile config "default-k8s-diff-port-612312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:05:03.150650  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.153988  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.154472  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.154524  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.154631  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.154840  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.155194  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.155397  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.155741  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:03.155990  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:03.156011  303063 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:05:03.417981  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:05:03.418020  303063 machine.go:96] duration metric: took 939.078754ms to provisionDockerMachine
	I0920 19:05:03.418038  303063 start.go:293] postStartSetup for "default-k8s-diff-port-612312" (driver="kvm2")
	I0920 19:05:03.418052  303063 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:05:03.418083  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.418456  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:05:03.418496  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.421689  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.422245  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.422282  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.422539  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.422747  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.422991  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.423144  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.509122  303063 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:05:03.515233  303063 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:05:03.515263  303063 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:05:03.515343  303063 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:05:03.515441  303063 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:05:03.515561  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:05:03.529346  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:03.559267  303063 start.go:296] duration metric: took 141.209592ms for postStartSetup
	I0920 19:05:03.559320  303063 fix.go:56] duration metric: took 20.196510123s for fixHost
	I0920 19:05:03.559348  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.563599  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.564320  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.564354  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.564605  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.564917  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.565120  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.565379  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.565588  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:03.565813  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:03.565827  303063 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:05:03.674803  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859103.651785276
	
	I0920 19:05:03.674833  303063 fix.go:216] guest clock: 1726859103.651785276
	I0920 19:05:03.674840  303063 fix.go:229] Guest: 2024-09-20 19:05:03.651785276 +0000 UTC Remote: 2024-09-20 19:05:03.559326363 +0000 UTC m=+280.560675514 (delta=92.458913ms)
	I0920 19:05:03.674862  303063 fix.go:200] guest clock delta is within tolerance: 92.458913ms
	I0920 19:05:03.674867  303063 start.go:83] releasing machines lock for "default-k8s-diff-port-612312", held for 20.312097182s
	I0920 19:05:03.674897  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.675183  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:03.677975  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.678374  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.678406  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.678552  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.679080  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.679255  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.679380  303063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:05:03.679429  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.679442  303063 ssh_runner.go:195] Run: cat /version.json
	I0920 19:05:03.679472  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.682443  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.682733  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.682876  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.682902  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.683014  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.683081  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.683104  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.683222  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.683326  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.683440  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.683512  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.683634  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.683721  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.683753  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.766786  303063 ssh_runner.go:195] Run: systemctl --version
	I0920 19:05:03.806684  303063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:05:03.950032  303063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:05:03.957153  303063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:05:03.957230  303063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:05:03.976784  303063 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:05:03.976814  303063 start.go:495] detecting cgroup driver to use...
	I0920 19:05:03.976902  303063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:05:03.994391  303063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:05:04.009961  303063 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:05:04.010021  303063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:05:04.023827  303063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:05:04.038585  303063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:05:04.157489  303063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:05:04.320396  303063 docker.go:233] disabling docker service ...
	I0920 19:05:04.320477  303063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:05:04.334865  303063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:05:04.350776  303063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:05:04.469438  303063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:05:04.596055  303063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:05:04.610548  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:05:04.629128  303063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:05:04.629192  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.640211  303063 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:05:04.640289  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.650877  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.661863  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.672695  303063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:05:04.684141  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.696358  303063 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.714936  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.726155  303063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:05:04.737400  303063 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:05:04.737460  303063 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:05:04.752752  303063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:05:04.767664  303063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:04.892509  303063 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:05:04.992361  303063 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:05:04.992465  303063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:05:04.997119  303063 start.go:563] Will wait 60s for crictl version
	I0920 19:05:04.997215  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:05:05.001132  303063 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:05:05.050835  303063 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:05:05.050955  303063 ssh_runner.go:195] Run: crio --version
	I0920 19:05:05.079870  303063 ssh_runner.go:195] Run: crio --version
	I0920 19:05:05.112325  303063 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 19:05:05.113600  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:05.116591  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:05.117037  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:05.117075  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:05.117334  303063 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 19:05:05.122086  303063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:05.135489  303063 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-612312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-612312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.230 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:05:05.135682  303063 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:05:05.135776  303063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:05.174026  303063 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 19:05:05.174090  303063 ssh_runner.go:195] Run: which lz4
	I0920 19:05:05.179003  303063 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:05:05.184119  303063 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:05:05.184168  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 19:05:06.479331  303063 crio.go:462] duration metric: took 1.300388015s to copy over tarball
	I0920 19:05:06.479434  303063 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 19:05:05.040094  303486 main.go:141] libmachine: (old-k8s-version-425599) Waiting to get IP...
	I0920 19:05:05.041198  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:05.041615  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:05.041711  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:05.041616  304380 retry.go:31] will retry after 264.073086ms: waiting for machine to come up
	I0920 19:05:05.307229  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:05.307761  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:05.307784  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:05.307713  304380 retry.go:31] will retry after 317.541552ms: waiting for machine to come up
	I0920 19:05:05.627262  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:05.627903  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:05.627929  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:05.627797  304380 retry.go:31] will retry after 432.236037ms: waiting for machine to come up
	I0920 19:05:06.062368  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:06.062842  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:06.062873  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:06.062804  304380 retry.go:31] will retry after 525.683807ms: waiting for machine to come up
	I0920 19:05:06.590915  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:06.591405  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:06.591434  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:06.591355  304380 retry.go:31] will retry after 542.00244ms: waiting for machine to come up
	I0920 19:05:07.135388  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:07.135944  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:07.135998  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:07.135908  304380 retry.go:31] will retry after 886.798885ms: waiting for machine to come up
	I0920 19:05:08.024147  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:08.024684  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:08.024713  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:08.024596  304380 retry.go:31] will retry after 826.869965ms: waiting for machine to come up
	I0920 19:05:08.853176  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:08.853793  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:08.853828  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:08.853736  304380 retry.go:31] will retry after 1.007422775s: waiting for machine to come up
	I0920 19:05:05.756992  302869 pod_ready.go:103] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:08.255312  302869 pod_ready.go:103] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:08.656490  303063 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.1770136s)
	I0920 19:05:08.656529  303063 crio.go:469] duration metric: took 2.177156837s to extract the tarball
	I0920 19:05:08.656539  303063 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 19:05:08.693153  303063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:08.733444  303063 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:05:08.733473  303063 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:05:08.733484  303063 kubeadm.go:934] updating node { 192.168.50.230 8444 v1.31.1 crio true true} ...
	I0920 19:05:08.733624  303063 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-612312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-612312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:05:08.733710  303063 ssh_runner.go:195] Run: crio config
	I0920 19:05:08.777872  303063 cni.go:84] Creating CNI manager for ""
	I0920 19:05:08.777913  303063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:05:08.777927  303063 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:05:08.777957  303063 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.230 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-612312 NodeName:default-k8s-diff-port-612312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:05:08.778143  303063 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.230
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-612312"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:05:08.778220  303063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:05:08.788133  303063 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:05:08.788208  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:05:08.797461  303063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0920 19:05:08.814111  303063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:05:08.832188  303063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0920 19:05:08.849801  303063 ssh_runner.go:195] Run: grep 192.168.50.230	control-plane.minikube.internal$ /etc/hosts
	I0920 19:05:08.853809  303063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:08.865685  303063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:08.985881  303063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:05:09.002387  303063 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312 for IP: 192.168.50.230
	I0920 19:05:09.002417  303063 certs.go:194] generating shared ca certs ...
	I0920 19:05:09.002441  303063 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:09.002656  303063 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:05:09.002727  303063 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:05:09.002741  303063 certs.go:256] generating profile certs ...
	I0920 19:05:09.002859  303063 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/client.key
	I0920 19:05:09.002940  303063 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/apiserver.key.637d18af
	I0920 19:05:09.002990  303063 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/proxy-client.key
	I0920 19:05:09.003207  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:05:09.003248  303063 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:05:09.003256  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:05:09.003278  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:05:09.003306  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:05:09.003328  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:05:09.003365  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:09.004030  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:05:09.037203  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:05:09.068858  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:05:09.095082  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:05:09.122167  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 19:05:09.147953  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 19:05:09.174251  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:05:09.202438  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:05:09.231354  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:05:09.256365  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:05:09.282589  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:05:09.308610  303063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:05:09.328798  303063 ssh_runner.go:195] Run: openssl version
	I0920 19:05:09.334685  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:05:09.345947  303063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:09.350772  303063 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:09.350838  303063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:09.356595  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:05:09.367559  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:05:09.380638  303063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:05:09.385362  303063 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:05:09.385429  303063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:05:09.391299  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:05:09.402065  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:05:09.412841  303063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:05:09.417074  303063 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:05:09.417138  303063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:05:09.422761  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:05:09.433780  303063 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:05:09.438734  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:05:09.444888  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:05:09.450715  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:05:09.456993  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:05:09.462716  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:05:09.468847  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 19:05:09.474680  303063 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-612312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-612312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.230 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:05:09.474780  303063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:05:09.474844  303063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:09.513886  303063 cri.go:89] found id: ""
	I0920 19:05:09.514006  303063 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:05:09.524385  303063 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:05:09.524417  303063 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:05:09.524479  303063 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:05:09.534288  303063 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:05:09.535251  303063 kubeconfig.go:125] found "default-k8s-diff-port-612312" server: "https://192.168.50.230:8444"
	I0920 19:05:09.537293  303063 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:05:09.547753  303063 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.230
	I0920 19:05:09.547796  303063 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:05:09.547812  303063 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:05:09.547890  303063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:09.590656  303063 cri.go:89] found id: ""
	I0920 19:05:09.590743  303063 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:05:09.607426  303063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:05:09.617258  303063 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:05:09.617280  303063 kubeadm.go:157] found existing configuration files:
	
	I0920 19:05:09.617344  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 19:05:09.626725  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:05:09.626813  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:05:09.636421  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 19:05:09.645711  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:05:09.645780  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:05:09.655351  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 19:05:09.664771  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:05:09.664833  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:05:09.674556  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 19:05:09.683677  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:05:09.683821  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:05:09.695159  303063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:05:09.704995  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:09.821398  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:10.642045  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:10.870266  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:10.935191  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:11.015669  303063 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:05:11.015787  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:11.516670  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:12.016486  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:12.516070  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:13.016012  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:13.031718  303063 api_server.go:72] duration metric: took 2.016048489s to wait for apiserver process to appear ...
	I0920 19:05:13.031752  303063 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:05:13.031781  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:13.032414  303063 api_server.go:269] stopped: https://192.168.50.230:8444/healthz: Get "https://192.168.50.230:8444/healthz": dial tcp 192.168.50.230:8444: connect: connection refused
	I0920 19:05:09.863227  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:09.863693  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:09.863721  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:09.863640  304380 retry.go:31] will retry after 1.556199895s: waiting for machine to come up
	I0920 19:05:11.422510  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:11.423244  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:11.423271  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:11.423179  304380 retry.go:31] will retry after 1.670177778s: waiting for machine to come up
	I0920 19:05:13.095982  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:13.096600  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:13.096626  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:13.096545  304380 retry.go:31] will retry after 2.71780554s: waiting for machine to come up
	I0920 19:05:10.256325  302869 pod_ready.go:93] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.256352  302869 pod_ready.go:82] duration metric: took 6.510267221s for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.256361  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.263229  302869 pod_ready.go:93] pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.263254  302869 pod_ready.go:82] duration metric: took 6.886052ms for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.263264  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xs4nd" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.270014  302869 pod_ready.go:93] pod "kube-proxy-xs4nd" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.270040  302869 pod_ready.go:82] duration metric: took 6.769102ms for pod "kube-proxy-xs4nd" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.270049  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.277232  302869 pod_ready.go:93] pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.277262  302869 pod_ready.go:82] duration metric: took 7.203732ms for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.277275  302869 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:12.284083  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:14.284983  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:13.532830  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:15.579530  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:05:15.579567  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:05:15.579584  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:15.596526  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:05:15.596570  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:05:16.032011  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:16.039310  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:05:16.039346  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:05:16.531881  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:16.536703  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:05:16.536736  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:05:17.032322  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:17.036979  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 200:
	ok
	I0920 19:05:17.043667  303063 api_server.go:141] control plane version: v1.31.1
	I0920 19:05:17.043701  303063 api_server.go:131] duration metric: took 4.011936277s to wait for apiserver health ...
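The healthz wait above follows a common pattern: poll the apiserver's /healthz endpoint on a short interval, treat the early 403 from the anonymous user and the 500 while post-start hooks finish as "not ready yet", and stop once it returns 200. The Go sketch below only illustrates that polling pattern and is not minikube's api_server.go code; the URL, interval, and timeout are assumed values mirrored from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns
// HTTP 200 or the timeout expires. Early 403 (anonymous user) and 500
// (post-start hooks still running) responses are treated as "not ready yet".
func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver cert is not trusted by the probing host, so this
		// sketch skips verification, like an anonymous health probe would.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	// Values mirror the log: https://192.168.50.230:8444/healthz, ~500ms between checks.
	if err := waitForHealthz("https://192.168.50.230:8444/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}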
	I0920 19:05:17.043710  303063 cni.go:84] Creating CNI manager for ""
	I0920 19:05:17.043716  303063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:05:17.045376  303063 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:05:17.046579  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:05:17.056771  303063 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:05:17.076571  303063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:05:17.085546  303063 system_pods.go:59] 8 kube-system pods found
	I0920 19:05:17.085584  303063 system_pods.go:61] "coredns-7c65d6cfc9-427x2" [48b87f9f-4697-4d76-aed1-3d54720172c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:05:17.085591  303063 system_pods.go:61] "etcd-default-k8s-diff-port-612312" [8537d9ce-87da-425d-a90a-eb4f30f9d23f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 19:05:17.085597  303063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-612312" [d5705298-0b1f-4f95-bf5d-c795e09dd30e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 19:05:17.085608  303063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-612312" [99b04f73-91d6-42d4-8def-09b65ca142f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:05:17.085615  303063 system_pods.go:61] "kube-proxy-zp8l5" [9fe30e51-ef3f-4448-916a-8ad75832b207] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 19:05:17.085624  303063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-612312" [4b9dfd39-5b60-4a90-9edf-0af7e99f0ac6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 19:05:17.085631  303063 system_pods.go:61] "metrics-server-6867b74b74-2tnqc" [35ce9a11-e606-41da-84bf-b3c5e9a18245] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:05:17.085638  303063 system_pods.go:61] "storage-provisioner" [6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 19:05:17.085646  303063 system_pods.go:74] duration metric: took 9.051189ms to wait for pod list to return data ...
	I0920 19:05:17.085657  303063 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:05:17.089161  303063 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:05:17.089190  303063 node_conditions.go:123] node cpu capacity is 2
	I0920 19:05:17.089201  303063 node_conditions.go:105] duration metric: took 3.534622ms to run NodePressure ...
	I0920 19:05:17.089218  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:17.442957  303063 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 19:05:17.447222  303063 kubeadm.go:739] kubelet initialised
	I0920 19:05:17.447247  303063 kubeadm.go:740] duration metric: took 4.255349ms waiting for restarted kubelet to initialise ...
	I0920 19:05:17.447255  303063 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:05:17.451839  303063 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.457216  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.457240  303063 pod_ready.go:82] duration metric: took 5.361636ms for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.457250  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.457256  303063 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.462245  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.462273  303063 pod_ready.go:82] duration metric: took 5.009342ms for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.462313  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.462326  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.468060  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.468087  303063 pod_ready.go:82] duration metric: took 5.75409ms for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.468099  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.468105  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.479703  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.479727  303063 pod_ready.go:82] duration metric: took 11.614638ms for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.479739  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.479750  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.879555  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-proxy-zp8l5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.879582  303063 pod_ready.go:82] duration metric: took 399.824208ms for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.879592  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-proxy-zp8l5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.879599  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:18.281551  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.281585  303063 pod_ready.go:82] duration metric: took 401.976884ms for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:18.281601  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.281611  303063 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:18.680674  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.680711  303063 pod_ready.go:82] duration metric: took 399.091849ms for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:18.680723  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.680730  303063 pod_ready.go:39] duration metric: took 1.233465539s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
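The pod_ready entries above check each system-critical pod's Ready condition, but first skip any pod whose hosting node is itself not Ready, which is why every pod here is logged with "(skipping!)". Purely as an illustration of that check, and not the pod_ready.go implementation, a client-go sketch might look like the following; the kubeconfig path and pod name are assumptions:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod has Ready=True, returning an error when the
// node hosting the pod is not Ready (mirroring the "skipping!" lines above).
func podReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	ctx := context.TODO()
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
			return false, fmt.Errorf("node %q hosting pod %q is not Ready (skipping)", node.Name, name)
		}
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Assumed kubeconfig path; the test harness uses its own per-profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(podReady(cs, "kube-system", "coredns-7c65d6cfc9-427x2"))
}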
	I0920 19:05:18.680747  303063 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:05:18.692948  303063 ops.go:34] apiserver oom_adj: -16
	I0920 19:05:18.692970  303063 kubeadm.go:597] duration metric: took 9.168545987s to restartPrimaryControlPlane
	I0920 19:05:18.692981  303063 kubeadm.go:394] duration metric: took 9.218309896s to StartCluster
	I0920 19:05:18.692999  303063 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:18.693078  303063 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:05:18.694921  303063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:18.695293  303063 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.230 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:05:18.696157  303063 config.go:182] Loaded profile config "default-k8s-diff-port-612312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:05:18.696187  303063 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:05:18.696357  303063 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-612312"
	I0920 19:05:18.696377  303063 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-612312"
	W0920 19:05:18.696387  303063 addons.go:243] addon storage-provisioner should already be in state true
	I0920 19:05:18.696419  303063 host.go:66] Checking if "default-k8s-diff-port-612312" exists ...
	I0920 19:05:18.696449  303063 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-612312"
	I0920 19:05:18.696495  303063 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-612312"
	I0920 19:05:18.696506  303063 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-612312"
	I0920 19:05:18.696588  303063 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-612312"
	W0920 19:05:18.696610  303063 addons.go:243] addon metrics-server should already be in state true
	I0920 19:05:18.696709  303063 host.go:66] Checking if "default-k8s-diff-port-612312" exists ...
	I0920 19:05:18.697239  303063 out.go:177] * Verifying Kubernetes components...
	I0920 19:05:18.697334  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.697386  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.697409  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.697409  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.697442  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.697531  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.698927  303063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:18.713346  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44359
	I0920 19:05:18.713346  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35127
	I0920 19:05:18.713967  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.714000  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.714472  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.714491  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.714572  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.714588  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.714961  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.714965  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.715163  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.715842  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.715893  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.717732  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I0920 19:05:18.718289  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.718553  303063 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-612312"
	W0920 19:05:18.718575  303063 addons.go:243] addon default-storageclass should already be in state true
	I0920 19:05:18.718604  303063 host.go:66] Checking if "default-k8s-diff-port-612312" exists ...
	I0920 19:05:18.718827  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.718852  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.718926  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.718956  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.719243  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.719782  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.719826  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.733219  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42087
	I0920 19:05:18.733789  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.734403  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.734422  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.734463  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41091
	I0920 19:05:18.734905  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.734993  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.735207  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.735363  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.735394  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.735703  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.736264  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.736321  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.737489  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:18.739977  303063 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 19:05:18.740477  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39547
	I0920 19:05:18.741217  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.741752  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:05:18.741770  303063 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:05:18.741791  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:18.741854  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.741875  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.742351  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.742547  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.744800  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:18.746006  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.746416  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:18.746442  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.746695  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:18.746961  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:18.746974  303063 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:15.815519  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:15.816035  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:15.816065  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:15.815974  304380 retry.go:31] will retry after 2.62788631s: waiting for machine to come up
	I0920 19:05:18.446768  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:18.447219  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:18.447240  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:18.447166  304380 retry.go:31] will retry after 4.025841071s: waiting for machine to come up
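The retry.go lines in this block show libmachine waiting for the restarted VM to obtain an IP address, retrying with a growing, jittered delay each time the DHCP lease is not found yet. A generic sketch of that wait-with-backoff pattern follows; getIP is a placeholder for whatever check is being retried, and the delays are invented for the example rather than read from retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// errNoIP stands in for "unable to find current IP address of domain".
var errNoIP = errors.New("no IP address yet")

// getIP is a placeholder for the real check (reading the DHCP leases of the
// libvirt network); here it simply fails a few times before succeeding.
var attempts int

func getIP() (string, error) {
	attempts++
	if attempts < 4 {
		return "", errNoIP
	}
	return "192.168.39.53", nil
}

// waitForIP retries getIP with an increasing, jittered delay, similar in
// spirit to the "will retry after ..." messages in the log.
func waitForIP(maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := time.Second
	for time.Now().Before(deadline) {
		ip, err := getIP()
		if err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2 // back off before the next attempt
	}
	return "", fmt.Errorf("machine did not come up within %s", maxWait)
}

func main() {
	fmt.Println(waitForIP(2 * time.Minute))
}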
	I0920 19:05:16.784503  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:18.785829  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:18.747159  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:18.747332  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:18.748881  303063 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:05:18.748901  303063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:05:18.748932  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:18.752335  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.752787  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:18.752812  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.753180  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:18.753340  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:18.753491  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:18.753628  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:18.755106  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I0920 19:05:18.755543  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.756159  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.756182  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.756521  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.756710  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.758400  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:18.758674  303063 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:05:18.758690  303063 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:05:18.758707  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:18.762208  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.762748  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:18.762776  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.762950  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:18.763235  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:18.763518  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:18.763678  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:18.900876  303063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:05:18.919923  303063 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-612312" to be "Ready" ...
	I0920 19:05:18.993779  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:05:18.993814  303063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 19:05:19.001703  303063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:05:19.019424  303063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:05:19.054174  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:05:19.054202  303063 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:05:19.123651  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:05:19.123682  303063 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:05:19.186745  303063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:05:19.369866  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:19.369898  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:19.370210  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:19.370229  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:19.370246  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:19.370270  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:19.370552  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:19.370593  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:19.370625  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Closing plugin on server side
	I0920 19:05:19.380105  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:19.380140  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:19.380456  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:19.380472  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.145346  303063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.12587258s)
	I0920 19:05:20.145412  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.145427  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.145769  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Closing plugin on server side
	I0920 19:05:20.145834  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.145846  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.145866  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.145877  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.146126  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.146144  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.152067  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.152084  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.152361  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.152379  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.152388  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.152395  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.152625  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.152662  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Closing plugin on server side
	I0920 19:05:20.152711  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.152729  303063 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-612312"
	I0920 19:05:20.154940  303063 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0920 19:05:20.156326  303063 addons.go:510] duration metric: took 1.460148296s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
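Each addon above is enabled by copying its manifests into /etc/kubernetes/addons inside the VM and then running the pinned kubectl against the in-VM kubeconfig, as the ssh_runner lines show. The sketch below only illustrates that final apply step as a local command; in the real flow it runs over SSH inside the guest, and the binary and manifest paths are copied from the log rather than guaranteed to exist elsewhere:

package main

import (
	"fmt"
	"os/exec"
)

// applyAddonManifests mirrors the pattern in the log: run the pinned kubectl
// with the in-VM kubeconfig against each addon manifest that was copied to
// /etc/kubernetes/addons. Paths are illustrative, taken from the log above.
func applyAddonManifests(manifests ...string) error {
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.1/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	// The real flow executes this over SSH inside the VM (ssh_runner); here it
	// is shown as a plain local invocation via sudo env.
	cmd := exec.Command("sudo", append([]string{"env"}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyAddonManifests(
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
	)
	fmt.Println(err)
}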
	I0920 19:05:20.923687  303063 node_ready.go:53] node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:22.924271  303063 node_ready.go:53] node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:23.791151  302538 start.go:364] duration metric: took 54.811585482s to acquireMachinesLock for "no-preload-037711"
	I0920 19:05:23.791208  302538 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:05:23.791219  302538 fix.go:54] fixHost starting: 
	I0920 19:05:23.791657  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:23.791696  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:23.809350  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32987
	I0920 19:05:23.809873  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:23.810520  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:05:23.810555  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:23.810893  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:23.811118  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:23.811286  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:05:23.812885  302538 fix.go:112] recreateIfNeeded on no-preload-037711: state=Stopped err=<nil>
	I0920 19:05:23.812914  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	W0920 19:05:23.813135  302538 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:05:23.815287  302538 out.go:177] * Restarting existing kvm2 VM for "no-preload-037711" ...
	I0920 19:05:22.477850  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.478419  303486 main.go:141] libmachine: (old-k8s-version-425599) Found IP for machine: 192.168.39.53
	I0920 19:05:22.478454  303486 main.go:141] libmachine: (old-k8s-version-425599) Reserving static IP address...
	I0920 19:05:22.478473  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has current primary IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.478983  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "old-k8s-version-425599", mac: "52:54:00:d2:a5:70", ip: "192.168.39.53"} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.479021  303486 main.go:141] libmachine: (old-k8s-version-425599) Reserved static IP address: 192.168.39.53
	I0920 19:05:22.479040  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | skip adding static IP to network mk-old-k8s-version-425599 - found existing host DHCP lease matching {name: "old-k8s-version-425599", mac: "52:54:00:d2:a5:70", ip: "192.168.39.53"}
	I0920 19:05:22.479055  303486 main.go:141] libmachine: (old-k8s-version-425599) Waiting for SSH to be available...
	I0920 19:05:22.479067  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | Getting to WaitForSSH function...
	I0920 19:05:22.481118  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.481359  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.481382  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.481556  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | Using SSH client type: external
	I0920 19:05:22.481570  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa (-rw-------)
	I0920 19:05:22.481600  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:05:22.481612  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | About to run SSH command:
	I0920 19:05:22.481627  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | exit 0
	I0920 19:05:22.606383  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | SSH cmd err, output: <nil>: 
	I0920 19:05:22.606783  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetConfigRaw
	I0920 19:05:22.607408  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:22.610155  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.610474  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.610506  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.610784  303486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/config.json ...
	I0920 19:05:22.611075  303486 machine.go:93] provisionDockerMachine start ...
	I0920 19:05:22.611103  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:22.611332  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.613838  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.614250  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.614283  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.614395  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:22.614609  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.614776  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.614950  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:22.615136  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:22.615331  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:22.615344  303486 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:05:22.718330  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:05:22.718363  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 19:05:22.718651  303486 buildroot.go:166] provisioning hostname "old-k8s-version-425599"
	I0920 19:05:22.718697  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 19:05:22.718913  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.722027  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.722334  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.722370  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.722559  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:22.722738  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.722909  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.723086  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:22.723261  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:22.723473  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:22.723491  303486 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-425599 && echo "old-k8s-version-425599" | sudo tee /etc/hostname
	I0920 19:05:22.841563  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-425599
	
	I0920 19:05:22.841592  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.844327  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.844716  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.844748  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.844970  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:22.845154  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.845306  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.845413  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:22.845570  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:22.845793  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:22.845818  303486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-425599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-425599/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-425599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:05:22.959542  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:05:22.959572  303486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:05:22.959615  303486 buildroot.go:174] setting up certificates
	I0920 19:05:22.959625  303486 provision.go:84] configureAuth start
	I0920 19:05:22.959635  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 19:05:22.959972  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:22.962506  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.962845  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.962883  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.963020  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.965352  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.965734  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.965755  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.965936  303486 provision.go:143] copyHostCerts
	I0920 19:05:22.965999  303486 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:05:22.966018  303486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:05:22.966073  303486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:05:22.966165  303486 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:05:22.966173  303486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:05:22.966193  303486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:05:22.966250  303486 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:05:22.966257  303486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:05:22.966274  303486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:05:22.966368  303486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-425599 san=[127.0.0.1 192.168.39.53 localhost minikube old-k8s-version-425599]
	I0920 19:05:23.156245  303486 provision.go:177] copyRemoteCerts
	I0920 19:05:23.156322  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:05:23.156356  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.159694  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.160062  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.160105  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.160283  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.160467  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.160633  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.160755  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.244439  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:05:23.271796  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 19:05:23.298124  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:05:23.323466  303486 provision.go:87] duration metric: took 363.82725ms to configureAuth
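For context on the configureAuth step that just completed: provision.go signs a server certificate against the local minikube CA with the SANs listed above (127.0.0.1, 192.168.39.53, localhost, minikube, old-k8s-version-425599). The run does this in Go; the openssl sketch below is only an illustrative equivalent, not what was executed:
	# illustrative only -- the run generates these certs in Go, not with openssl
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.old-k8s-version-425599" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.53,DNS:localhost,DNS:minikube,DNS:old-k8s-version-425599") \
	  -out server.pem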
	I0920 19:05:23.323496  303486 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:05:23.323711  303486 config.go:182] Loaded profile config "old-k8s-version-425599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 19:05:23.323805  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.326985  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.327336  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.327363  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.327573  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.327788  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.328003  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.328161  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.328315  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:23.328492  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:23.328506  303486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:05:23.559721  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:05:23.559755  303486 machine.go:96] duration metric: took 948.663189ms to provisionDockerMachine
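The CRIO_MINIKUBE_OPTIONS drop-in written just above only matters if the guest's crio unit sources /etc/sysconfig/crio.minikube; that wiring lives in the minikube ISO and is not visible in this log, so treat it as an assumption. A quick way to confirm it on the guest after the restart would be:
	cat /etc/sysconfig/crio.minikube    # should contain --insecure-registry 10.96.0.0/12
	systemctl cat crio                  # shows the unit file, including any EnvironmentFile= line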
	I0920 19:05:23.559770  303486 start.go:293] postStartSetup for "old-k8s-version-425599" (driver="kvm2")
	I0920 19:05:23.559781  303486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:05:23.559812  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.560186  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:05:23.560225  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.563146  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.563462  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.563491  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.563786  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.564018  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.564214  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.564365  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.645013  303486 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:05:23.649198  303486 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:05:23.649230  303486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:05:23.649300  303486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:05:23.649416  303486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:05:23.649544  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:05:23.659351  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:23.683405  303486 start.go:296] duration metric: took 123.617289ms for postStartSetup
	I0920 19:05:23.683466  303486 fix.go:56] duration metric: took 20.008417985s for fixHost
	I0920 19:05:23.683495  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.686540  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.686962  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.686988  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.687209  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.687445  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.687624  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.687803  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.688001  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:23.688188  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:23.688206  303486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:05:23.790992  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859123.767729644
	
	I0920 19:05:23.791024  303486 fix.go:216] guest clock: 1726859123.767729644
	I0920 19:05:23.791035  303486 fix.go:229] Guest: 2024-09-20 19:05:23.767729644 +0000 UTC Remote: 2024-09-20 19:05:23.683472425 +0000 UTC m=+234.770765310 (delta=84.257219ms)
	I0920 19:05:23.791061  303486 fix.go:200] guest clock delta is within tolerance: 84.257219ms
	I0920 19:05:23.791068  303486 start.go:83] releasing machines lock for "old-k8s-version-425599", held for 20.116056408s
	I0920 19:05:23.791101  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.791432  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:23.794540  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.795015  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.795048  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.795226  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.795779  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.795992  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.796129  303486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:05:23.796180  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.796241  303486 ssh_runner.go:195] Run: cat /version.json
	I0920 19:05:23.796265  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.799032  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.799374  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.799399  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.799418  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.799540  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.799743  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.799874  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.799890  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.799906  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.800084  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.800077  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.800198  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.800365  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.800514  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.924885  303486 ssh_runner.go:195] Run: systemctl --version
	I0920 19:05:23.932642  303486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:05:21.284671  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:23.284813  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:24.083860  303486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:05:24.090360  303486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:05:24.090444  303486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:05:24.112281  303486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:05:24.112310  303486 start.go:495] detecting cgroup driver to use...
	I0920 19:05:24.112383  303486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:05:24.136600  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:05:24.154552  303486 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:05:24.154631  303486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:05:24.170600  303486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:05:24.186071  303486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:05:24.319752  303486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:05:24.498299  303486 docker.go:233] disabling docker service ...
	I0920 19:05:24.498385  303486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:05:24.515762  303486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:05:24.533482  303486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:05:24.687481  303486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:05:24.820191  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:05:24.835255  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:05:24.856179  303486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 19:05:24.856253  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.868991  303486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:05:24.869080  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.884074  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.898732  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
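Taken together, the sed edits above (pause image, cgroup manager, conmon cgroup) leave /etc/crio/crio.conf.d/02-crio.conf with roughly the keys below. Only the key/value pairs come from the commands in the log; the section headers are an assumption about CRI-O's stock drop-in layout:
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"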
	I0920 19:05:24.911016  303486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:05:24.922757  303486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:05:24.937719  303486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:05:24.937828  303486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:05:24.955496  303486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
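The modprobe and the echo into /proc above are one-shot and hold only for the current boot, which is fine for a throwaway VM. A persistent equivalent on a stock systemd host (illustrative, not something this run does) would be:
	echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
	printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/99-kubernetes.conf
	sudo sysctl --system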
	I0920 19:05:24.966347  303486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:25.114758  303486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:05:25.226807  303486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:05:25.226984  303486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:05:25.234576  303486 start.go:563] Will wait 60s for crictl version
	I0920 19:05:25.234664  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:25.238739  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:05:25.282242  303486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:05:25.282344  303486 ssh_runner.go:195] Run: crio --version
	I0920 19:05:25.317733  303486 ssh_runner.go:195] Run: crio --version
	I0920 19:05:25.353767  303486 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 19:05:23.816707  302538 main.go:141] libmachine: (no-preload-037711) Calling .Start
	I0920 19:05:23.817003  302538 main.go:141] libmachine: (no-preload-037711) Ensuring networks are active...
	I0920 19:05:23.817953  302538 main.go:141] libmachine: (no-preload-037711) Ensuring network default is active
	I0920 19:05:23.818345  302538 main.go:141] libmachine: (no-preload-037711) Ensuring network mk-no-preload-037711 is active
	I0920 19:05:23.818824  302538 main.go:141] libmachine: (no-preload-037711) Getting domain xml...
	I0920 19:05:23.819705  302538 main.go:141] libmachine: (no-preload-037711) Creating domain...
	I0920 19:05:25.216298  302538 main.go:141] libmachine: (no-preload-037711) Waiting to get IP...
	I0920 19:05:25.217452  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:25.218073  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:25.218138  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:25.218047  304582 retry.go:31] will retry after 256.299732ms: waiting for machine to come up
	I0920 19:05:25.475745  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:25.476451  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:25.476485  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:25.476388  304582 retry.go:31] will retry after 298.732749ms: waiting for machine to come up
	I0920 19:05:25.777093  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:25.777731  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:25.777755  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:25.777701  304582 retry.go:31] will retry after 360.011383ms: waiting for machine to come up
	I0920 19:05:26.139480  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:26.140100  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:26.140132  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:26.140049  304582 retry.go:31] will retry after 593.756705ms: waiting for machine to come up
	I0920 19:05:24.924455  303063 node_ready.go:53] node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:26.425132  303063 node_ready.go:49] node "default-k8s-diff-port-612312" has status "Ready":"True"
	I0920 19:05:26.425165  303063 node_ready.go:38] duration metric: took 7.505210484s for node "default-k8s-diff-port-612312" to be "Ready" ...
	I0920 19:05:26.425181  303063 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:05:26.433394  303063 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:26.440462  303063 pod_ready.go:93] pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:26.440497  303063 pod_ready.go:82] duration metric: took 7.072952ms for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:26.440513  303063 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:25.354959  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:25.358179  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:25.358467  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:25.358495  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:25.358739  303486 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 19:05:25.362714  303486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:25.375880  303486 kubeadm.go:883] updating cluster {Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:05:25.376024  303486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 19:05:25.376074  303486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:25.420224  303486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 19:05:25.420307  303486 ssh_runner.go:195] Run: which lz4
	I0920 19:05:25.424775  303486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:05:25.430102  303486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:05:25.430151  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 19:05:27.014068  303486 crio.go:462] duration metric: took 1.589333502s to copy over tarball
	I0920 19:05:27.014160  303486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 19:05:25.786282  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:27.788058  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:26.735924  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:26.736558  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:26.736582  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:26.736458  304582 retry.go:31] will retry after 712.118443ms: waiting for machine to come up
	I0920 19:05:27.450059  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:27.450696  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:27.450719  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:27.450592  304582 retry.go:31] will retry after 588.649809ms: waiting for machine to come up
	I0920 19:05:28.041216  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:28.041760  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:28.041791  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:28.041691  304582 retry.go:31] will retry after 869.42079ms: waiting for machine to come up
	I0920 19:05:28.912809  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:28.913240  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:28.913265  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:28.913174  304582 retry.go:31] will retry after 1.410011475s: waiting for machine to come up
	I0920 19:05:30.324367  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:30.324952  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:30.324980  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:30.324875  304582 retry.go:31] will retry after 1.398358739s: waiting for machine to come up
	I0920 19:05:28.454512  303063 pod_ready.go:103] pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:31.546557  303063 pod_ready.go:103] pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:32.072690  303063 pod_ready.go:93] pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.072719  303063 pod_ready.go:82] duration metric: took 5.632196538s for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.072734  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.081029  303063 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.081062  303063 pod_ready.go:82] duration metric: took 8.319382ms for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.081076  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.087314  303063 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.087338  303063 pod_ready.go:82] duration metric: took 6.253184ms for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.087351  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.093286  303063 pod_ready.go:93] pod "kube-proxy-zp8l5" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.093313  303063 pod_ready.go:82] duration metric: took 5.953425ms for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.093326  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.098529  303063 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.098553  303063 pod_ready.go:82] duration metric: took 5.218413ms for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.098565  303063 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:30.096727  303486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.082523066s)
	I0920 19:05:30.096778  303486 crio.go:469] duration metric: took 3.082671461s to extract the tarball
	I0920 19:05:30.096789  303486 ssh_runner.go:146] rm: /preloaded.tar.lz4
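Condensed, the preload fallback that just ran (no tarball on the guest, so copy it over and unpack into /var) is roughly the two commands below. $MINIKUBE_HOME stands in for /home/jenkins/minikube-integration/19679-237658/.minikube, and the scp permission details are hand-waved:
	scp $MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 \
	  docker@192.168.39.53:/preloaded.tar.lz4
	ssh docker@192.168.39.53 \
	  'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4'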
	I0920 19:05:30.148059  303486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:30.184547  303486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 19:05:30.184578  303486 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 19:05:30.184672  303486 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:30.184686  303486 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.184711  303486 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.184730  303486 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.184732  303486 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.184686  303486 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 19:05:30.184693  303486 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.184792  303486 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.186558  303486 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:30.186609  303486 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 19:05:30.186607  303486 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.186616  303486 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.186688  303486 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.186698  303486 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.186701  303486 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.186565  303486 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.425283  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 19:05:30.469378  303486 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 19:05:30.469448  303486 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 19:05:30.469514  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.475453  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:05:30.493250  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.505003  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.513203  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.514365  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.521729  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:05:30.533265  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.580710  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.613984  303486 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 19:05:30.614033  303486 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.614085  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.653094  303486 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 19:05:30.653150  303486 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.653205  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.675697  303486 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 19:05:30.675730  303486 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 19:05:30.675752  303486 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.675762  303486 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.675805  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.675820  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.675805  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:05:30.709199  303486 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 19:05:30.709261  303486 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.709310  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.720146  303486 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 19:05:30.720198  303486 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.720233  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.720313  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.720241  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.720374  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.720247  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.737444  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.737487  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 19:05:30.843272  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.843362  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.843366  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.860414  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.860462  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.860430  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.954641  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.982227  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.982263  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:05:31.041996  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:31.042032  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:31.042650  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:31.042722  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:31.070786  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 19:05:31.120407  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 19:05:31.135751  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 19:05:31.163591  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 19:05:31.164483  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 19:05:31.164587  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 19:05:31.345957  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:31.486337  303486 cache_images.go:92] duration metric: took 1.301737533s to LoadCachedImages
	W0920 19:05:31.486434  303486 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
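The warning above means neither the preload tarball nor the on-disk image cache had registry.k8s.io/pause:3.2, so the v1.20.0 images get pulled during kubeadm bring-up instead. As an aside (not something this run did), the cache path it looked in can be pre-seeded with minikube's own cache command:
	minikube cache add registry.k8s.io/pause:3.2
	minikube cache list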
	I0920 19:05:31.486452  303486 kubeadm.go:934] updating node { 192.168.39.53 8443 v1.20.0 crio true true} ...
	I0920 19:05:31.486576  303486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-425599 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:05:31.486661  303486 ssh_runner.go:195] Run: crio config
	I0920 19:05:31.544181  303486 cni.go:84] Creating CNI manager for ""
	I0920 19:05:31.544215  303486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:05:31.544229  303486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:05:31.544257  303486 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.53 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-425599 NodeName:old-k8s-version-425599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 19:05:31.544465  303486 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-425599"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:05:31.544556  303486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 19:05:31.559445  303486 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:05:31.559542  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:05:31.570446  303486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0920 19:05:31.588741  303486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:05:31.606454  303486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
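The 2120-byte payload scp'd just above is the kubeadm config printed earlier. Once reconciled on the guest (the .new suffix handling is not shown in this snippet), cluster bring-up boils down to something like the command below; minikube adds further flags such as preflight-error ignores that are not reproduced here, so treat the exact invocation as an assumption:
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml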
	I0920 19:05:31.624483  303486 ssh_runner.go:195] Run: grep 192.168.39.53	control-plane.minikube.internal$ /etc/hosts
	I0920 19:05:31.628285  303486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:31.641039  303486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:31.771690  303486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:05:31.789746  303486 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599 for IP: 192.168.39.53
	I0920 19:05:31.789775  303486 certs.go:194] generating shared ca certs ...
	I0920 19:05:31.789806  303486 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:31.790074  303486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:05:31.790150  303486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:05:31.790165  303486 certs.go:256] generating profile certs ...
	I0920 19:05:31.798117  303486 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/client.key
	I0920 19:05:31.798270  303486 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.key.e78cb154
	I0920 19:05:31.798333  303486 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.key
	I0920 19:05:31.798499  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:05:31.798543  303486 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:05:31.798557  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:05:31.798608  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:05:31.798659  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:05:31.798692  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:05:31.798748  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:31.799624  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:05:31.843298  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:05:31.877299  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:05:31.909777  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:05:31.947787  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 19:05:31.991175  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 19:05:32.019393  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:05:32.048475  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:05:32.084354  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:05:32.112161  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:05:32.138991  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:05:32.167653  303486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:05:32.185485  303486 ssh_runner.go:195] Run: openssl version
	I0920 19:05:32.192030  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:05:32.203761  303486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:32.209550  303486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:32.209650  303486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:32.216277  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:05:32.228192  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:05:32.239984  303486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:05:32.244782  303486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:05:32.244848  303486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:05:32.250865  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:05:32.262035  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:05:32.273790  303486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:05:32.279335  303486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:05:32.279414  303486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:05:32.286501  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:05:32.298052  303486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:05:32.303064  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:05:32.309973  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:05:32.316704  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:05:32.323166  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:05:32.330126  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:05:32.336554  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
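Each `openssl x509 -noout -in <cert> -checkend 86400` call above simply asks whether the certificate stays valid for at least another 86400 seconds (24 hours). A minimal Go sketch of the same check follows, using one of the paths from the log as an example; this is illustrative, not minikube's implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path taken from the log above; adjust for the cert being checked.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of "-checkend 86400": does the cert outlive the next 24 hours?
	deadline := time.Now().Add(24 * time.Hour)
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid past 24h:", cert.NotAfter)
	}
}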
	I0920 19:05:32.343303  303486 kubeadm.go:392] StartCluster: {Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:05:32.343413  303486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:05:32.343473  303486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:32.387562  303486 cri.go:89] found id: ""
	I0920 19:05:32.387653  303486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:05:32.398143  303486 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:05:32.398167  303486 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:05:32.398222  303486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:05:32.407776  303486 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:05:32.409205  303486 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-425599" does not appear in /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:05:32.410267  303486 kubeconfig.go:62] /home/jenkins/minikube-integration/19679-237658/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-425599" cluster setting kubeconfig missing "old-k8s-version-425599" context setting]
	I0920 19:05:32.411776  303486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:32.457074  303486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:05:32.468055  303486 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.53
	I0920 19:05:32.468113  303486 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:05:32.468132  303486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:05:32.468211  303486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:32.505151  303486 cri.go:89] found id: ""
	I0920 19:05:32.505241  303486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:05:32.521391  303486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:05:32.531705  303486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:05:32.531728  303486 kubeadm.go:157] found existing configuration files:
	
	I0920 19:05:32.531774  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:05:32.541137  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:05:32.541219  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:05:32.550684  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:05:32.560262  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:05:32.560352  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:05:32.569735  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:05:32.579126  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:05:32.579199  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:05:32.589508  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:05:32.600985  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:05:32.601100  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:05:32.611511  303486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:05:32.622346  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:32.755562  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:33.793472  303486 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.037864747s)
	I0920 19:05:33.793513  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:30.283826  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:32.285077  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:31.725721  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:31.726171  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:31.726198  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:31.726127  304582 retry.go:31] will retry after 2.32427136s: waiting for machine to come up
	I0920 19:05:34.052412  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:34.053005  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:34.053043  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:34.052923  304582 retry.go:31] will retry after 2.159036217s: waiting for machine to come up
	I0920 19:05:36.215059  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:36.215561  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:36.215585  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:36.215501  304582 retry.go:31] will retry after 3.424610182s: waiting for machine to come up
	I0920 19:05:34.105780  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:36.106491  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:34.021260  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:34.142176  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:34.235507  303486 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:05:34.235618  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:34.736586  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:35.236065  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:35.735783  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:36.236406  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:36.736243  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:37.235994  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:37.736168  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:38.236559  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:38.736139  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:34.784743  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:37.282598  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:39.284890  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:39.642163  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:39.642600  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:39.642642  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:39.642541  304582 retry.go:31] will retry after 3.073679854s: waiting for machine to come up
	I0920 19:05:38.116192  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:40.605958  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:39.236010  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:39.735723  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:40.236003  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:40.735741  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:41.235689  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:41.736411  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:42.236028  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:42.735814  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:43.236391  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:43.736174  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:41.783707  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:43.784197  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:42.719195  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.719748  302538 main.go:141] libmachine: (no-preload-037711) Found IP for machine: 192.168.61.136
	I0920 19:05:42.719775  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has current primary IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.719780  302538 main.go:141] libmachine: (no-preload-037711) Reserving static IP address...
	I0920 19:05:42.720201  302538 main.go:141] libmachine: (no-preload-037711) Reserved static IP address: 192.168.61.136
	I0920 19:05:42.720220  302538 main.go:141] libmachine: (no-preload-037711) Waiting for SSH to be available...
	I0920 19:05:42.720239  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "no-preload-037711", mac: "52:54:00:b0:ac:14", ip: "192.168.61.136"} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.720268  302538 main.go:141] libmachine: (no-preload-037711) DBG | skip adding static IP to network mk-no-preload-037711 - found existing host DHCP lease matching {name: "no-preload-037711", mac: "52:54:00:b0:ac:14", ip: "192.168.61.136"}
	I0920 19:05:42.720280  302538 main.go:141] libmachine: (no-preload-037711) DBG | Getting to WaitForSSH function...
	I0920 19:05:42.722402  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.722661  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.722686  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.722864  302538 main.go:141] libmachine: (no-preload-037711) DBG | Using SSH client type: external
	I0920 19:05:42.722877  302538 main.go:141] libmachine: (no-preload-037711) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa (-rw-------)
	I0920 19:05:42.722939  302538 main.go:141] libmachine: (no-preload-037711) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:05:42.722962  302538 main.go:141] libmachine: (no-preload-037711) DBG | About to run SSH command:
	I0920 19:05:42.722979  302538 main.go:141] libmachine: (no-preload-037711) DBG | exit 0
	I0920 19:05:42.850057  302538 main.go:141] libmachine: (no-preload-037711) DBG | SSH cmd err, output: <nil>: 
	I0920 19:05:42.850451  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetConfigRaw
	I0920 19:05:42.851176  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:42.853807  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.854268  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.854290  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.854558  302538 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/config.json ...
	I0920 19:05:42.854764  302538 machine.go:93] provisionDockerMachine start ...
	I0920 19:05:42.854782  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:42.854999  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:42.857347  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.857683  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.857712  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.857892  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:42.858073  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.858242  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.858385  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:42.858569  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:42.858755  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:42.858766  302538 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:05:42.962098  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:05:42.962137  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:05:42.962455  302538 buildroot.go:166] provisioning hostname "no-preload-037711"
	I0920 19:05:42.962488  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:05:42.962696  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:42.965410  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.965793  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.965822  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.965954  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:42.966128  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.966285  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.966442  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:42.966650  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:42.966822  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:42.966847  302538 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-037711 && echo "no-preload-037711" | sudo tee /etc/hostname
	I0920 19:05:43.089291  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-037711
	
	I0920 19:05:43.089338  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.092213  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.092658  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.092689  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.092828  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.093031  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.093188  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.093305  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.093478  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:43.093692  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:43.093719  302538 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-037711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-037711/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-037711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:05:43.210625  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:05:43.210660  302538 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:05:43.210720  302538 buildroot.go:174] setting up certificates
	I0920 19:05:43.210740  302538 provision.go:84] configureAuth start
	I0920 19:05:43.210758  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:05:43.211093  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:43.213829  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.214346  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.214379  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.214542  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.216979  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.217294  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.217319  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.217461  302538 provision.go:143] copyHostCerts
	I0920 19:05:43.217526  302538 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:05:43.217546  302538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:05:43.217610  302538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:05:43.217708  302538 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:05:43.217720  302538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:05:43.217750  302538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:05:43.217885  302538 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:05:43.217899  302538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:05:43.217947  302538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:05:43.218008  302538 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.no-preload-037711 san=[127.0.0.1 192.168.61.136 localhost minikube no-preload-037711]
	I0920 19:05:43.395507  302538 provision.go:177] copyRemoteCerts
	I0920 19:05:43.395576  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:05:43.395607  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.398288  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.398663  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.398694  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.398899  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.399087  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.399205  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.399324  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:43.488543  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 19:05:43.514793  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:05:43.537520  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:05:43.561983  302538 provision.go:87] duration metric: took 351.22541ms to configureAuth
	I0920 19:05:43.562021  302538 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:05:43.562213  302538 config.go:182] Loaded profile config "no-preload-037711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:05:43.562292  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.565776  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.566235  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.566270  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.566486  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.566706  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.566895  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.567043  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.567251  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:43.567439  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:43.567454  302538 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:05:43.797110  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:05:43.797142  302538 machine.go:96] duration metric: took 942.364782ms to provisionDockerMachine
	I0920 19:05:43.797157  302538 start.go:293] postStartSetup for "no-preload-037711" (driver="kvm2")
	I0920 19:05:43.797171  302538 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:05:43.797193  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:43.797516  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:05:43.797546  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.800148  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.800532  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.800559  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.800794  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.800993  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.801158  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.801255  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:43.885788  302538 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:05:43.890070  302538 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:05:43.890108  302538 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:05:43.890198  302538 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:05:43.890293  302538 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:05:43.890405  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:05:43.899679  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:43.924928  302538 start.go:296] duration metric: took 127.752462ms for postStartSetup
	I0920 19:05:43.924973  302538 fix.go:56] duration metric: took 20.133755115s for fixHost
	I0920 19:05:43.924996  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.927678  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.928059  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.928099  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.928277  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.928517  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.928685  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.928815  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.928979  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:43.929190  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:43.929204  302538 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:05:44.042745  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859144.016675004
	
	I0920 19:05:44.042769  302538 fix.go:216] guest clock: 1726859144.016675004
	I0920 19:05:44.042776  302538 fix.go:229] Guest: 2024-09-20 19:05:44.016675004 +0000 UTC Remote: 2024-09-20 19:05:43.924977449 +0000 UTC m=+357.534412233 (delta=91.697555ms)
	I0920 19:05:44.042804  302538 fix.go:200] guest clock delta is within tolerance: 91.697555ms
	I0920 19:05:44.042819  302538 start.go:83] releasing machines lock for "no-preload-037711", held for 20.251627041s
	I0920 19:05:44.042842  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.043134  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:44.046071  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.046412  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:44.046440  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.046613  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.047113  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.047278  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.047366  302538 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:05:44.047428  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:44.047520  302538 ssh_runner.go:195] Run: cat /version.json
	I0920 19:05:44.047548  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:44.050275  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.050358  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.050849  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:44.050872  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:44.050892  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.050915  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.051095  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:44.051259  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:44.051259  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:44.051496  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:44.051637  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:44.051655  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:44.051789  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:44.051953  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:44.134420  302538 ssh_runner.go:195] Run: systemctl --version
	I0920 19:05:44.175303  302538 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:05:44.319129  302538 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:05:44.325894  302538 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:05:44.325975  302538 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:05:44.341779  302538 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:05:44.341809  302538 start.go:495] detecting cgroup driver to use...
	I0920 19:05:44.341899  302538 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:05:44.358211  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:05:44.373240  302538 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:05:44.373327  302538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:05:44.387429  302538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:05:44.401684  302538 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:05:44.521292  302538 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:05:44.668050  302538 docker.go:233] disabling docker service ...
	I0920 19:05:44.668124  302538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:05:44.683196  302538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:05:44.696604  302538 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:05:44.843581  302538 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:05:44.959377  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:05:44.973472  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:05:44.991282  302538 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:05:44.991344  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.001696  302538 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:05:45.001776  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.012684  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.023288  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.034330  302538 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:05:45.045773  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.056332  302538 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.074730  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.085656  302538 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:05:45.096371  302538 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:05:45.096447  302538 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:05:45.112094  302538 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:05:45.123050  302538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:45.236136  302538 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:05:45.325978  302538 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:05:45.326065  302538 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:05:45.330452  302538 start.go:563] Will wait 60s for crictl version
	I0920 19:05:45.330527  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.334010  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:05:45.373622  302538 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:05:45.373736  302538 ssh_runner.go:195] Run: crio --version
	I0920 19:05:45.401279  302538 ssh_runner.go:195] Run: crio --version
	I0920 19:05:45.430445  302538 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 19:05:45.431717  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:45.434768  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:45.435094  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:45.435121  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:45.435335  302538 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 19:05:45.439275  302538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
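The one-liner above keeps /etc/hosts idempotent across restarts: it filters out any existing host.minikube.internal entry and re-appends the current gateway IP. A rough Go equivalent for illustration only (the real code shells out exactly as logged; rewriteHosts is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"strings"
)

// rewriteHosts drops any line ending in "\thost.minikube.internal" and appends a
// fresh mapping for ip, mirroring the bash one-liner in the log line above.
func rewriteHosts(path, ip string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\thost.minikube.internal")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := rewriteHosts("/etc/hosts", "192.168.61.1"); err != nil {
		fmt.Println("hosts update failed:", err)
	}
}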
	I0920 19:05:45.451300  302538 kubeadm.go:883] updating cluster {Name:no-preload-037711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-037711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:05:45.451461  302538 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:05:45.451502  302538 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:45.485045  302538 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 19:05:45.485073  302538 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 19:05:45.485130  302538 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:45.485150  302538 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.485168  302538 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.485182  302538 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.485231  302538 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.485171  302538 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.485305  302538 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 19:05:45.485450  302538 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.486694  302538 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.486700  302538 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.486808  302538 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.486808  302538 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 19:05:45.486829  302538 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.486894  302538 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:45.486829  302538 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.487055  302538 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
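Each required image is first looked up in a local Docker daemon (every lookup above fails because the build host has no such images) and then checked inside the VM with podman image inspect; any image whose ID does not match the expected hash is removed and re-loaded from the file cache. A hedged sketch of that existence check, wrapping the exact command shown in the following log lines (the imageID helper is illustrative, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID returns the ID podman reports for ref, or "" if the image is absent.
// It wraps the same command the log shows: sudo podman image inspect --format {{.Id}} <ref>.
func imageID(ref string) string {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Output()
	if err != nil {
		return ""
	}
	return strings.TrimSpace(string(out))
}

func main() {
	for _, ref := range []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/pause:3.10",
	} {
		if id := imageID(ref); id == "" {
			fmt.Println(ref, "needs transfer from the cache")
		} else {
			fmt.Println(ref, "present with ID", id)
		}
	}
}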
	I0920 19:05:45.708911  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0920 19:05:45.773014  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.815176  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.818274  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.818298  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.829644  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.850791  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.862553  302538 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0920 19:05:45.862616  302538 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.862680  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.907516  302538 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0920 19:05:45.907573  302538 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.907629  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.938640  302538 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0920 19:05:45.938715  302538 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.938755  302538 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0920 19:05:45.938799  302538 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.938845  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.938770  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.947658  302538 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0920 19:05:45.947706  302538 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.947757  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.965105  302538 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0920 19:05:45.965161  302538 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.965166  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.965191  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.965248  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.965282  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.965344  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.965350  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:46.044513  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:46.044640  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:46.077894  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:46.080113  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:46.080170  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:46.080239  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:46.155137  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:46.155188  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:46.208431  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:46.208477  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:46.208521  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:46.208565  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:46.290657  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:46.290694  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 19:05:46.290794  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 19:05:46.325206  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 19:05:46.325353  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 19:05:46.353181  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0920 19:05:46.353289  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 19:05:46.353307  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 19:05:46.353312  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0920 19:05:46.353331  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0920 19:05:46.353383  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0920 19:05:46.353418  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 19:05:46.353384  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 19:05:46.353512  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 19:05:46.379873  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0920 19:05:46.379934  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0920 19:05:46.379979  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 19:05:46.380024  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0920 19:05:46.379981  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0920 19:05:46.380134  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 19:05:43.105005  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:45.105781  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:47.604822  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:44.235886  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:44.736349  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:45.235783  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:45.736619  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:46.236082  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:46.736609  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:47.236078  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:47.736130  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:48.236218  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:48.735858  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:45.784555  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:47.785125  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:46.622278  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:48.339532  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.985991382s)
	I0920 19:05:48.339568  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0920 19:05:48.339594  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 19:05:48.339653  302538 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.959488823s)
	I0920 19:05:48.339685  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0920 19:05:48.339665  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 19:05:48.339742  302538 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.717432253s)
	I0920 19:05:48.339787  302538 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0920 19:05:48.339815  302538 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:48.339842  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:48.343725  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:50.823508  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.483779728s)
	I0920 19:05:50.823559  302538 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.479795238s)
	I0920 19:05:50.823593  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0920 19:05:50.823637  302538 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0920 19:05:50.823649  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:50.823692  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0920 19:05:49.607326  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:51.609055  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:49.236645  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:49.736183  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:50.236642  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:50.735713  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:51.235862  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:51.736479  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:52.235726  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:52.735939  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:53.235759  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:53.736290  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:50.284090  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:52.284996  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:54.127303  302538 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.303601736s)
	I0920 19:05:54.127415  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:54.127327  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.303608969s)
	I0920 19:05:54.127455  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0920 19:05:54.127488  302538 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0920 19:05:54.127530  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0920 19:05:56.202021  302538 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.074563861s)
	I0920 19:05:56.202050  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.074501802s)
	I0920 19:05:56.202076  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0920 19:05:56.202095  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 19:05:56.202118  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 19:05:56.202184  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 19:05:56.202202  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 19:05:56.207141  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0920 19:05:54.104909  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:56.105373  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:54.235840  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:54.735817  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:55.235812  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:55.736410  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:56.236203  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:56.735713  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:57.235777  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:57.735835  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:58.236448  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:58.736010  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:54.783661  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:56.784770  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:58.785122  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:58.166303  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.964088667s)
	I0920 19:05:58.166340  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0920 19:05:58.166369  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 19:05:58.166424  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 19:05:59.625258  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.458808535s)
	I0920 19:05:59.625294  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0920 19:05:59.625318  302538 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 19:05:59.625361  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0920 19:06:00.572722  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 19:06:00.572768  302538 cache_images.go:123] Successfully loaded all cached images
	I0920 19:06:00.572774  302538 cache_images.go:92] duration metric: took 15.087689513s to LoadCachedImages
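Because no preload tarball exists for this no-preload profile, every image above was copied from the host cache (skipped when the file already existed under /var/lib/minikube/images) and loaded sequentially with podman load, which is what accounts for the ~15s duration. A condensed sketch of that loop, assuming the staging paths shown in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Images are staged under /var/lib/minikube/images and loaded one at a time,
	// matching the "Loading image: ..." lines above.
	tarballs := []string{
		"/var/lib/minikube/images/kube-proxy_v1.31.1",
		"/var/lib/minikube/images/kube-apiserver_v1.31.1",
		"/var/lib/minikube/images/etcd_3.5.15-0",
		"/var/lib/minikube/images/coredns_v1.11.3",
		"/var/lib/minikube/images/kube-controller-manager_v1.31.1",
		"/var/lib/minikube/images/kube-scheduler_v1.31.1",
		"/var/lib/minikube/images/storage-provisioner_v5",
	}
	for _, t := range tarballs {
		if err := exec.Command("sudo", "podman", "load", "-i", t).Run(); err != nil {
			fmt.Println("failed to load", t, ":", err)
		}
	}
}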
	I0920 19:06:00.572788  302538 kubeadm.go:934] updating node { 192.168.61.136 8443 v1.31.1 crio true true} ...
	I0920 19:06:00.572917  302538 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-037711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-037711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:06:00.572994  302538 ssh_runner.go:195] Run: crio config
	I0920 19:06:00.619832  302538 cni.go:84] Creating CNI manager for ""
	I0920 19:06:00.619861  302538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:06:00.619875  302538 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:06:00.619910  302538 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.136 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-037711 NodeName:no-preload-037711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:06:00.620110  302538 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-037711"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:06:00.620181  302538 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:06:00.630434  302538 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:06:00.630513  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:06:00.639447  302538 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0920 19:06:00.656195  302538 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:06:00.675718  302538 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0920 19:06:00.709191  302538 ssh_runner.go:195] Run: grep 192.168.61.136	control-plane.minikube.internal$ /etc/hosts
	I0920 19:06:00.713271  302538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:06:00.726826  302538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:06:00.850927  302538 ssh_runner.go:195] Run: sudo systemctl start kubelet
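At this point the rendered kubelet drop-in (10-kubeadm.conf), the kubelet unit, and the kubeadm config have been written to the VM, and kubelet is started so kubeadm can take over. A simplified local sketch of that staging step, assuming direct file access rather than minikube's scp-over-ssh path; the config body is elided:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Stage the rendered kubeadm config where the log writes it (kubeadm.yaml.new),
	// then reload systemd and start kubelet, matching the Run lines above.
	if err := os.MkdirAll("/var/tmp/minikube", 0755); err != nil {
		panic(err)
	}
	cfg := []byte("apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n# ...rest of the config dumped above...\n")
	if err := os.WriteFile("/var/tmp/minikube/kubeadm.yaml.new", cfg, 0644); err != nil {
		panic(err)
	}
	_ = exec.Command("sudo", "systemctl", "daemon-reload").Run()
	_ = exec.Command("sudo", "systemctl", "start", "kubelet").Run()
}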
	I0920 19:06:00.869014  302538 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711 for IP: 192.168.61.136
	I0920 19:06:00.869044  302538 certs.go:194] generating shared ca certs ...
	I0920 19:06:00.869109  302538 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:06:00.869331  302538 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:06:00.869393  302538 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:06:00.869405  302538 certs.go:256] generating profile certs ...
	I0920 19:06:00.869507  302538 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/client.key
	I0920 19:06:00.869589  302538 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/apiserver.key.b5da98fb
	I0920 19:06:00.869654  302538 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/proxy-client.key
	I0920 19:06:00.869831  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:06:00.869877  302538 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:06:00.869890  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:06:00.869947  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:06:00.869981  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:06:00.870010  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:06:00.870068  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:06:00.870802  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:06:00.922699  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:06:00.953401  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:06:00.996889  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:06:01.024682  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 19:06:01.050412  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 19:06:01.081212  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:06:01.108337  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:06:01.133628  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:06:01.158805  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:06:01.186888  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:06:01.211771  302538 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:06:01.229448  302538 ssh_runner.go:195] Run: openssl version
	I0920 19:06:01.235289  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:06:01.246775  302538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:06:01.251410  302538 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:06:01.251472  302538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:06:01.257271  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:06:01.268229  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:06:01.280431  302538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:06:01.285643  302538 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:06:01.285736  302538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:06:01.291772  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:06:01.302858  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:06:01.314034  302538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:06:01.319160  302538 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:06:01.319235  302538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:06:01.325450  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
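The three blocks above install each CA bundle under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-based clients locate trusted roots. A small sketch of computing the hash and creating the symlink, assuming the openssl CLI is available; this is illustrative, not the minikube helper:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// "openssl x509 -hash -noout -in <cert>" prints the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Equivalent of: ln -fs <cert> /etc/ssl/certs/<hash>.0
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
}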
	I0920 19:06:01.336803  302538 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:06:01.341439  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:06:01.347592  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:06:01.354109  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:06:01.360513  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:06:01.366749  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:06:01.372898  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
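Each control-plane certificate is verified with openssl x509 -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours (86400 seconds); a failing check would trigger regeneration. An equivalent check using Go's crypto/x509, shown only to illustrate what -checkend does:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring "openssl x509 -noout -checkend <seconds>".
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}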
	I0920 19:06:01.379101  302538 kubeadm.go:392] StartCluster: {Name:no-preload-037711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-037711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:06:01.379228  302538 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:06:01.379280  302538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:06:01.416896  302538 cri.go:89] found id: ""
	I0920 19:06:01.416972  302538 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:05:58.606203  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:00.606802  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:59.236283  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:59.736440  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:00.236142  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:00.735772  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:01.236360  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:01.736388  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:02.236462  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:02.736742  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.236457  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.736705  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:01.284596  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:03.784495  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:01.428611  302538 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:06:01.428636  302538 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:06:01.428685  302538 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:06:01.439392  302538 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:06:01.440512  302538 kubeconfig.go:125] found "no-preload-037711" server: "https://192.168.61.136:8443"
	I0920 19:06:01.442938  302538 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:06:01.452938  302538 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.136
	I0920 19:06:01.452982  302538 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:06:01.452999  302538 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:06:01.453062  302538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:06:01.487878  302538 cri.go:89] found id: ""
	I0920 19:06:01.487967  302538 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:06:01.506032  302538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:06:01.516536  302538 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:06:01.516562  302538 kubeadm.go:157] found existing configuration files:
	
	I0920 19:06:01.516609  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:06:01.526718  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:06:01.526790  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:06:01.536809  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:06:01.546172  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:06:01.546243  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:06:01.556211  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:06:01.565796  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:06:01.565869  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:06:01.577089  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:06:01.587862  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:06:01.587985  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:06:01.598666  302538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
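The cleanup above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes the file when the endpoint (or the file itself) is missing, so the kubeadm init phases that follow can regenerate them; finally the staged kubeadm.yaml.new replaces kubeadm.yaml. A compact sketch of that check-and-remove loop, using the same commands as the log:

package main

import "os/exec"

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits non-zero when the endpoint (or the file) is missing; the stale
		// file is then removed so the kubeadm init phases below can regenerate it.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
	// Promote the staged config, as in "sudo cp kubeadm.yaml.new kubeadm.yaml" above.
	_ = exec.Command("sudo", "cp", "/var/tmp/minikube/kubeadm.yaml.new", "/var/tmp/minikube/kubeadm.yaml").Run()
}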
	I0920 19:06:01.610018  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:01.740046  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.566817  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.784258  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.848752  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.933469  302538 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:06:02.933579  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.434385  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.933975  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.962422  302538 api_server.go:72] duration metric: took 1.028951755s to wait for apiserver process to appear ...
	I0920 19:06:03.962453  302538 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:06:03.962485  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:03.963119  302538 api_server.go:269] stopped: https://192.168.61.136:8443/healthz: Get "https://192.168.61.136:8443/healthz": dial tcp 192.168.61.136:8443: connect: connection refused
	I0920 19:06:04.462843  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.443140  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:06:06.443178  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:06:06.443196  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.485554  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:06:06.485597  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:06:06.485614  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.566023  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:06:06.566068  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
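The restart then polls https://192.168.61.136:8443/healthz: the first attempt fails with connection refused, the next ones return 403 (the probe is anonymous, so only the status code matters), then 500 while post-start hooks such as rbac/bootstrap-roles are still completing, and the loop keeps going until a 200 arrives or the wait times out. A minimal polling sketch; it skips TLS verification purely for brevity, whereas the real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.136:8443/healthz"
	for i := 0; i < 60; i++ {
		resp, err := client.Get(url)
		if err == nil {
			fmt.Println(url, "returned", resp.StatusCode)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}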
	I0920 19:06:06.963116  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.972764  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:06:06.972804  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:06:07.463432  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:07.470963  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:06:07.471000  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:06:07.962553  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:07.967724  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 200:
	ok
	I0920 19:06:07.975215  302538 api_server.go:141] control plane version: v1.31.1
	I0920 19:06:07.975248  302538 api_server.go:131] duration metric: took 4.01278814s to wait for apiserver health ...
	I0920 19:06:07.975258  302538 cni.go:84] Creating CNI manager for ""
	I0920 19:06:07.975267  302538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:06:07.977455  302538 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:06:03.106079  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:05.609475  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:04.236005  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:04.735854  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:05.236716  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:05.736668  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:06.235839  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:06.736412  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:07.236224  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:07.735830  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:08.235800  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:08.736645  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:06.284930  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:08.784854  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:07.979099  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:06:07.991210  302538 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:06:08.016110  302538 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:06:08.031124  302538 system_pods.go:59] 8 kube-system pods found
	I0920 19:06:08.031177  302538 system_pods.go:61] "coredns-7c65d6cfc9-8gmsq" [91d89ad2-f899-464c-b351-a0773c16223b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:06:08.031191  302538 system_pods.go:61] "etcd-no-preload-037711" [5b353ad3-0389-4e3d-b5c3-2f2bc65db200] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 19:06:08.031203  302538 system_pods.go:61] "kube-apiserver-no-preload-037711" [b19002c7-f891-4bc1-a2f0-0f6beebb3987] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 19:06:08.031247  302538 system_pods.go:61] "kube-controller-manager-no-preload-037711" [a5b1951d-7189-4ee3-bc28-bed058048ebb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:06:08.031262  302538 system_pods.go:61] "kube-proxy-zzmkv" [c8f4695b-eefd-407a-9b7c-d5078632d120] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 19:06:08.031270  302538 system_pods.go:61] "kube-scheduler-no-preload-037711" [b44824ba-52ad-4e86-9408-118f0e1852d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 19:06:08.031280  302538 system_pods.go:61] "metrics-server-6867b74b74-7xpgm" [f6280d56-5be4-475f-91da-2862e992868f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:06:08.031290  302538 system_pods.go:61] "storage-provisioner" [d1efb64f-d2a9-4bb4-9bc3-c643c415fcf2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 19:06:08.031300  302538 system_pods.go:74] duration metric: took 15.160935ms to wait for pod list to return data ...
	I0920 19:06:08.031310  302538 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:06:08.035903  302538 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:06:08.035953  302538 node_conditions.go:123] node cpu capacity is 2
	I0920 19:06:08.035968  302538 node_conditions.go:105] duration metric: took 4.652846ms to run NodePressure ...
	I0920 19:06:08.035995  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:08.404721  302538 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 19:06:08.409400  302538 kubeadm.go:739] kubelet initialised
	I0920 19:06:08.409423  302538 kubeadm.go:740] duration metric: took 4.670172ms waiting for restarted kubelet to initialise ...
	I0920 19:06:08.409432  302538 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:06:08.416547  302538 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:10.426817  302538 pod_ready.go:103] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:08.107050  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:10.606744  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:09.236127  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:09.735809  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:10.236585  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:10.735863  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:11.236700  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:11.736557  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:12.236483  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:12.735695  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:13.235905  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:13.736128  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:10.785471  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:13.284642  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:12.923811  302538 pod_ready.go:103] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:15.423162  302538 pod_ready.go:103] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:15.926280  302538 pod_ready.go:93] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:15.926318  302538 pod_ready.go:82] duration metric: took 7.509740963s for pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:15.926332  302538 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:15.932683  302538 pod_ready.go:93] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:15.932713  302538 pod_ready.go:82] duration metric: took 6.372388ms for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:15.932725  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:13.111190  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:15.606371  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:14.236234  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:14.736677  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:15.236499  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:15.735667  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:16.235774  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:16.735833  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:17.236149  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:17.735782  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:18.236400  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:18.736460  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:15.784441  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:18.284748  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:17.938853  302538 pod_ready.go:103] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:19.939569  302538 pod_ready.go:103] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:18.104867  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:20.105870  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:22.605773  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:19.236298  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:19.736672  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:20.236401  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:20.735810  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:21.235673  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:21.736112  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:22.235998  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:22.736179  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:23.236680  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:23.736388  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:20.783320  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:22.783590  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:21.939753  302538 pod_ready.go:93] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:21.939781  302538 pod_ready.go:82] duration metric: took 6.007035191s for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:21.939794  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.446396  302538 pod_ready.go:93] pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:22.446425  302538 pod_ready.go:82] duration metric: took 506.622064ms for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.446435  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zzmkv" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.452105  302538 pod_ready.go:93] pod "kube-proxy-zzmkv" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:22.452130  302538 pod_ready.go:82] duration metric: took 5.688419ms for pod "kube-proxy-zzmkv" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.452139  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.456181  302538 pod_ready.go:93] pod "kube-scheduler-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:22.456205  302538 pod_ready.go:82] duration metric: took 4.05917ms for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.456215  302538 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:24.463262  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:24.606021  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:27.105497  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:24.236369  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:24.736082  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:25.236694  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:25.736346  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:26.236075  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:26.736666  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:27.236418  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:27.736656  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:28.235972  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:28.735743  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:24.783673  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:26.783960  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.283970  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:26.962413  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.462423  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.606628  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:32.105603  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.236688  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:29.736132  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:30.236404  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:30.735733  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:31.236364  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:31.736031  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:32.236457  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:32.735751  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:33.236371  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:33.736474  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:31.284572  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:33.286630  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:31.464686  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:33.962309  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:35.963445  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:34.105897  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:36.605140  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:34.236387  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:34.236472  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:34.276702  303486 cri.go:89] found id: ""
	I0920 19:06:34.276735  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.276747  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:34.276758  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:34.276815  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:34.312886  303486 cri.go:89] found id: ""
	I0920 19:06:34.312923  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.312935  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:34.312950  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:34.313024  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:34.347199  303486 cri.go:89] found id: ""
	I0920 19:06:34.347240  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.347250  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:34.347258  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:34.347332  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:34.383077  303486 cri.go:89] found id: ""
	I0920 19:06:34.383110  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.383121  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:34.383130  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:34.383202  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:34.421184  303486 cri.go:89] found id: ""
	I0920 19:06:34.421212  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.421222  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:34.421231  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:34.421304  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:34.459964  303486 cri.go:89] found id: ""
	I0920 19:06:34.459998  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.460009  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:34.460018  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:34.460085  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:34.493761  303486 cri.go:89] found id: ""
	I0920 19:06:34.493803  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.493815  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:34.493824  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:34.493894  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:34.534406  303486 cri.go:89] found id: ""
	I0920 19:06:34.534445  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.534457  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:34.534471  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:34.534496  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:34.607256  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:34.607297  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:34.644923  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:34.644953  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:34.693574  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:34.693622  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:34.707703  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:34.707742  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:34.846809  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:37.347895  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:37.377651  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:37.377728  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:37.430034  303486 cri.go:89] found id: ""
	I0920 19:06:37.430071  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.430079  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:37.430087  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:37.430156  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:37.467026  303486 cri.go:89] found id: ""
	I0920 19:06:37.467055  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.467063  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:37.467069  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:37.467120  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:37.505791  303486 cri.go:89] found id: ""
	I0920 19:06:37.505824  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.505835  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:37.505845  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:37.505943  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:37.541519  303486 cri.go:89] found id: ""
	I0920 19:06:37.541556  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.541568  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:37.541577  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:37.541633  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:37.576088  303486 cri.go:89] found id: ""
	I0920 19:06:37.576126  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.576137  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:37.576146  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:37.576204  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:37.613039  303486 cri.go:89] found id: ""
	I0920 19:06:37.613074  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.613084  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:37.613091  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:37.613153  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:37.656440  303486 cri.go:89] found id: ""
	I0920 19:06:37.656473  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.656482  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:37.656489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:37.656555  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:37.693247  303486 cri.go:89] found id: ""
	I0920 19:06:37.693283  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.693292  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:37.693302  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:37.693321  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:37.769230  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:37.769280  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:37.811016  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:37.811058  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:37.865729  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:37.865773  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:37.880056  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:37.880094  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:37.956402  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:35.783789  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:37.787063  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:38.461824  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:40.465028  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:38.605494  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:40.605606  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:40.457303  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:40.473769  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:40.473848  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:40.511320  303486 cri.go:89] found id: ""
	I0920 19:06:40.511354  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.511363  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:40.511371  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:40.511433  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:40.547086  303486 cri.go:89] found id: ""
	I0920 19:06:40.547127  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.547138  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:40.547147  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:40.547216  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:40.580969  303486 cri.go:89] found id: ""
	I0920 19:06:40.581010  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.581022  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:40.581035  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:40.581098  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:40.615802  303486 cri.go:89] found id: ""
	I0920 19:06:40.615842  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.615851  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:40.615858  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:40.615931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:40.649398  303486 cri.go:89] found id: ""
	I0920 19:06:40.649444  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.649459  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:40.649467  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:40.649541  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:40.683124  303486 cri.go:89] found id: ""
	I0920 19:06:40.683160  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.683172  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:40.683181  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:40.683251  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:40.718005  303486 cri.go:89] found id: ""
	I0920 19:06:40.718032  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.718040  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:40.718047  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:40.718107  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:40.751965  303486 cri.go:89] found id: ""
	I0920 19:06:40.751992  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.752000  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:40.752010  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:40.752024  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:40.765195  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:40.765234  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:40.842287  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:40.842321  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:40.842338  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:40.928384  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:40.928430  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:40.970207  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:40.970242  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:43.526435  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:43.540582  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:43.540680  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:43.576798  303486 cri.go:89] found id: ""
	I0920 19:06:43.576837  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.576846  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:43.576852  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:43.576916  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:43.615261  303486 cri.go:89] found id: ""
	I0920 19:06:43.615290  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.615298  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:43.615305  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:43.615359  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:43.651214  303486 cri.go:89] found id: ""
	I0920 19:06:43.651251  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.651264  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:43.651277  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:43.651338  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:43.684483  303486 cri.go:89] found id: ""
	I0920 19:06:43.684523  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.684535  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:43.684544  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:43.684614  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:43.720996  303486 cri.go:89] found id: ""
	I0920 19:06:43.721026  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.721035  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:43.721041  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:43.721107  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:43.764445  303486 cri.go:89] found id: ""
	I0920 19:06:43.764482  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.764493  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:43.764501  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:43.764564  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:43.808848  303486 cri.go:89] found id: ""
	I0920 19:06:43.808878  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.808888  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:43.808897  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:43.808968  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:43.845462  303486 cri.go:89] found id: ""
	I0920 19:06:43.845491  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.845500  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:43.845511  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:43.845525  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:43.896550  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:43.896596  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:43.909243  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:43.909272  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:06:40.284735  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:42.783363  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:42.962289  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:44.963071  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:43.106353  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:45.606296  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	W0920 19:06:43.987455  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:43.987474  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:43.987491  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:44.063585  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:44.063629  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:46.602859  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:46.617286  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:46.617357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:46.653643  303486 cri.go:89] found id: ""
	I0920 19:06:46.653681  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.653693  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:46.653702  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:46.653778  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:46.691169  303486 cri.go:89] found id: ""
	I0920 19:06:46.691198  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.691206  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:46.691213  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:46.691271  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:46.725498  303486 cri.go:89] found id: ""
	I0920 19:06:46.725527  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.725538  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:46.725545  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:46.725614  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:46.758850  303486 cri.go:89] found id: ""
	I0920 19:06:46.758876  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.758884  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:46.758891  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:46.758942  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:46.793648  303486 cri.go:89] found id: ""
	I0920 19:06:46.793683  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.793692  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:46.793699  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:46.793755  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:46.832908  303486 cri.go:89] found id: ""
	I0920 19:06:46.832940  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.832947  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:46.832953  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:46.833019  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:46.866450  303486 cri.go:89] found id: ""
	I0920 19:06:46.866502  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.866513  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:46.866522  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:46.866593  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:46.901966  303486 cri.go:89] found id: ""
	I0920 19:06:46.902001  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.902013  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:46.902026  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:46.902041  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:46.948901  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:46.948946  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:46.963489  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:46.963534  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:47.041701  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:47.041722  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:47.041736  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:47.124320  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:47.124364  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:44.783818  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:46.784000  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:48.785175  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:46.963700  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:49.462018  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:48.104361  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:50.105520  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:52.605799  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:49.664255  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:49.677240  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:49.677322  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:49.712375  303486 cri.go:89] found id: ""
	I0920 19:06:49.712401  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.712409  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:49.712415  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:49.712476  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:49.747682  303486 cri.go:89] found id: ""
	I0920 19:06:49.747713  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.747721  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:49.747727  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:49.747783  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:49.782276  303486 cri.go:89] found id: ""
	I0920 19:06:49.782319  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.782329  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:49.782337  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:49.782400  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:49.822625  303486 cri.go:89] found id: ""
	I0920 19:06:49.822661  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.822672  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:49.822680  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:49.822751  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:49.862159  303486 cri.go:89] found id: ""
	I0920 19:06:49.862192  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.862202  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:49.862212  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:49.862281  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:49.897552  303486 cri.go:89] found id: ""
	I0920 19:06:49.897587  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.897595  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:49.897608  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:49.897667  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:49.931667  303486 cri.go:89] found id: ""
	I0920 19:06:49.931698  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.931709  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:49.931718  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:49.931774  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:49.969206  303486 cri.go:89] found id: ""
	I0920 19:06:49.969236  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.969244  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:49.969254  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:49.969266  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:50.019287  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:50.019328  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:50.033080  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:50.033113  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:50.106415  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:50.106442  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:50.106459  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:50.183710  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:50.183762  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:52.725443  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:52.739293  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:52.739386  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:52.772412  303486 cri.go:89] found id: ""
	I0920 19:06:52.772445  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.772454  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:52.772461  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:52.772528  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:52.811153  303486 cri.go:89] found id: ""
	I0920 19:06:52.811189  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.811197  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:52.811204  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:52.811260  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:52.848709  303486 cri.go:89] found id: ""
	I0920 19:06:52.848740  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.848749  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:52.848755  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:52.848811  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:52.883358  303486 cri.go:89] found id: ""
	I0920 19:06:52.883387  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.883394  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:52.883400  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:52.883455  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:52.917838  303486 cri.go:89] found id: ""
	I0920 19:06:52.917874  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.917893  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:52.917912  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:52.917982  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:52.952340  303486 cri.go:89] found id: ""
	I0920 19:06:52.952378  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.952387  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:52.952396  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:52.952471  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:52.986433  303486 cri.go:89] found id: ""
	I0920 19:06:52.986469  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.986478  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:52.986486  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:52.986582  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:53.024209  303486 cri.go:89] found id: ""
	I0920 19:06:53.024241  303486 logs.go:276] 0 containers: []
	W0920 19:06:53.024249  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:53.024260  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:53.024272  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:53.075336  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:53.075374  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:53.090761  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:53.090802  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:53.167883  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:53.167915  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:53.167933  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:53.242003  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:53.242044  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:50.785624  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:53.284212  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:51.462197  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:53.962545  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:55.962875  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:54.607806  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:57.105146  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:55.779107  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:55.793713  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:55.793802  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:55.829411  303486 cri.go:89] found id: ""
	I0920 19:06:55.829441  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.829450  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:55.829456  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:55.829513  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:55.864578  303486 cri.go:89] found id: ""
	I0920 19:06:55.864606  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.864617  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:55.864625  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:55.864686  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:55.897004  303486 cri.go:89] found id: ""
	I0920 19:06:55.897033  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.897041  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:55.897048  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:55.897106  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:55.931019  303486 cri.go:89] found id: ""
	I0920 19:06:55.931055  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.931066  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:55.931076  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:55.931141  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:55.966595  303486 cri.go:89] found id: ""
	I0920 19:06:55.966625  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.966635  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:55.966643  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:55.966693  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:55.999707  303486 cri.go:89] found id: ""
	I0920 19:06:55.999736  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.999747  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:55.999756  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:55.999825  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:56.034323  303486 cri.go:89] found id: ""
	I0920 19:06:56.034361  303486 logs.go:276] 0 containers: []
	W0920 19:06:56.034371  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:56.034377  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:56.034433  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:56.069019  303486 cri.go:89] found id: ""
	I0920 19:06:56.069048  303486 logs.go:276] 0 containers: []
	W0920 19:06:56.069056  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:56.069066  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:56.069077  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:56.122820  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:56.122860  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:56.136924  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:56.136966  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:56.216255  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:56.216284  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:56.216299  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:56.293461  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:56.293506  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:58.831252  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:58.844410  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:58.844474  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:58.877508  303486 cri.go:89] found id: ""
	I0920 19:06:58.877539  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.877547  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:58.877555  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:58.877613  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:58.911284  303486 cri.go:89] found id: ""
	I0920 19:06:58.911315  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.911323  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:58.911329  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:58.911382  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:58.944646  303486 cri.go:89] found id: ""
	I0920 19:06:58.944675  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.944682  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:58.944688  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:58.944739  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:55.784379  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:58.283450  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:58.461839  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:00.461977  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:59.108066  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:01.605247  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:58.979752  303486 cri.go:89] found id: ""
	I0920 19:06:58.979787  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.979798  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:58.979807  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:58.979864  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:59.016613  303486 cri.go:89] found id: ""
	I0920 19:06:59.016649  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.016661  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:59.016670  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:59.016735  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:59.052012  303486 cri.go:89] found id: ""
	I0920 19:06:59.052039  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.052047  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:59.052054  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:59.052106  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:59.090102  303486 cri.go:89] found id: ""
	I0920 19:06:59.090140  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.090152  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:59.090159  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:59.090213  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:59.128028  303486 cri.go:89] found id: ""
	I0920 19:06:59.128057  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.128068  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:59.128080  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:59.128096  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:59.142966  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:59.143012  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:59.227311  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:59.227336  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:59.227357  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:59.308319  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:59.308366  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:59.347299  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:59.347336  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:01.897644  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:01.912876  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:01.912951  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:01.956550  303486 cri.go:89] found id: ""
	I0920 19:07:01.956679  303486 logs.go:276] 0 containers: []
	W0920 19:07:01.956690  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:01.956700  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:01.956765  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:01.995391  303486 cri.go:89] found id: ""
	I0920 19:07:01.995425  303486 logs.go:276] 0 containers: []
	W0920 19:07:01.995433  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:01.995440  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:01.995501  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:02.031149  303486 cri.go:89] found id: ""
	I0920 19:07:02.031181  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.031193  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:02.031202  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:02.031273  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:02.065856  303486 cri.go:89] found id: ""
	I0920 19:07:02.065885  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.065894  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:02.065924  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:02.065981  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:02.101974  303486 cri.go:89] found id: ""
	I0920 19:07:02.102018  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.102032  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:02.102041  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:02.102115  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:02.138108  303486 cri.go:89] found id: ""
	I0920 19:07:02.138142  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.138151  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:02.138156  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:02.138217  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:02.170136  303486 cri.go:89] found id: ""
	I0920 19:07:02.170165  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.170173  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:02.170179  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:02.170244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:02.203944  303486 cri.go:89] found id: ""
	I0920 19:07:02.203969  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.203978  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:02.203991  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:02.204008  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:02.256635  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:02.256679  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:02.270266  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:02.270303  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:02.341145  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:02.341182  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:02.341199  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:02.415133  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:02.415175  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:00.283726  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:02.285304  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:02.462310  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:04.462872  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:03.605300  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:06.105872  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:04.952448  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:04.966632  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:04.966702  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:05.001098  303486 cri.go:89] found id: ""
	I0920 19:07:05.001131  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.001141  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:05.001149  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:05.001217  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:05.038160  303486 cri.go:89] found id: ""
	I0920 19:07:05.038186  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.038196  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:05.038202  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:05.038260  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:05.083301  303486 cri.go:89] found id: ""
	I0920 19:07:05.083346  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.083357  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:05.083365  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:05.083436  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:05.118916  303486 cri.go:89] found id: ""
	I0920 19:07:05.118952  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.118964  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:05.118972  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:05.119065  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:05.157452  303486 cri.go:89] found id: ""
	I0920 19:07:05.157485  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.157496  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:05.157511  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:05.157587  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:05.197100  303486 cri.go:89] found id: ""
	I0920 19:07:05.197133  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.197143  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:05.197152  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:05.197225  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:05.231286  303486 cri.go:89] found id: ""
	I0920 19:07:05.231317  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.231328  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:05.231336  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:05.231409  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:05.269798  303486 cri.go:89] found id: ""
	I0920 19:07:05.269835  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.269847  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:05.269862  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:05.269882  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:05.310029  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:05.310068  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:05.360493  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:05.360537  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:05.373771  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:05.373815  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:05.449860  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:05.449886  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:05.449924  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:08.034520  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:08.049970  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:08.050040  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:08.084683  303486 cri.go:89] found id: ""
	I0920 19:07:08.084714  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.084724  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:08.084731  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:08.084799  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:08.121150  303486 cri.go:89] found id: ""
	I0920 19:07:08.121176  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.121183  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:08.121190  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:08.121244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:08.157830  303486 cri.go:89] found id: ""
	I0920 19:07:08.157865  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.157877  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:08.157891  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:08.157967  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:08.191040  303486 cri.go:89] found id: ""
	I0920 19:07:08.191082  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.191094  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:08.191102  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:08.191169  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:08.230194  303486 cri.go:89] found id: ""
	I0920 19:07:08.230230  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.230239  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:08.230246  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:08.230304  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:08.268526  303486 cri.go:89] found id: ""
	I0920 19:07:08.268558  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.268566  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:08.268573  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:08.268631  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:08.302383  303486 cri.go:89] found id: ""
	I0920 19:07:08.302411  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.302420  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:08.302428  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:08.302492  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:08.336435  303486 cri.go:89] found id: ""
	I0920 19:07:08.336469  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.336479  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:08.336491  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:08.336505  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:08.418086  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:08.418129  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:08.458355  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:08.458391  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:08.507017  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:08.507062  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:08.522701  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:08.522737  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:08.592777  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:04.784475  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:07.283612  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:09.286218  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:06.963106  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:09.462861  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:08.108458  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:10.605447  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:12.605992  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:11.093689  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:11.107438  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:11.107503  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:11.139701  303486 cri.go:89] found id: ""
	I0920 19:07:11.139742  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.139755  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:11.139765  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:11.139822  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:11.196143  303486 cri.go:89] found id: ""
	I0920 19:07:11.196182  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.196191  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:11.196197  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:11.196268  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:11.232121  303486 cri.go:89] found id: ""
	I0920 19:07:11.232156  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.232164  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:11.232171  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:11.232238  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:11.267307  303486 cri.go:89] found id: ""
	I0920 19:07:11.267338  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.267349  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:11.267358  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:11.267423  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:11.306583  303486 cri.go:89] found id: ""
	I0920 19:07:11.306614  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.306623  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:11.306631  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:11.306698  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:11.348162  303486 cri.go:89] found id: ""
	I0920 19:07:11.348188  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.348196  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:11.348203  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:11.348257  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:11.383612  303486 cri.go:89] found id: ""
	I0920 19:07:11.383649  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.383660  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:11.383669  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:11.383736  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:11.417538  303486 cri.go:89] found id: ""
	I0920 19:07:11.417575  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.417583  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:11.417593  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:11.417609  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:11.470242  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:11.470282  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:11.485448  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:11.485480  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:11.559466  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:11.559495  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:11.559513  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:11.636080  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:11.636133  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:11.783461  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:13.783785  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:11.462940  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:13.963340  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:14.609611  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:17.105222  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:14.177278  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:14.190413  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:14.190483  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:14.224238  303486 cri.go:89] found id: ""
	I0920 19:07:14.224264  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.224272  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:14.224278  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:14.224330  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:14.265253  303486 cri.go:89] found id: ""
	I0920 19:07:14.265285  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.265297  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:14.265304  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:14.265357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:14.300591  303486 cri.go:89] found id: ""
	I0920 19:07:14.300619  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.300633  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:14.300639  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:14.300695  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:14.335638  303486 cri.go:89] found id: ""
	I0920 19:07:14.335669  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.335677  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:14.335683  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:14.335735  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:14.369291  303486 cri.go:89] found id: ""
	I0920 19:07:14.369328  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.369336  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:14.369344  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:14.369397  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:14.404913  303486 cri.go:89] found id: ""
	I0920 19:07:14.404947  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.404958  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:14.404967  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:14.405034  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:14.438793  303486 cri.go:89] found id: ""
	I0920 19:07:14.438834  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.438845  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:14.438856  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:14.438926  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:14.475268  303486 cri.go:89] found id: ""
	I0920 19:07:14.475297  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.475305  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:14.475321  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:14.475342  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:14.528066  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:14.528126  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:14.542850  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:14.542891  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:14.612772  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:14.612800  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:14.612819  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:14.694528  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:14.694579  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:17.234389  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:17.247479  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:17.247544  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:17.285461  303486 cri.go:89] found id: ""
	I0920 19:07:17.285488  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.285496  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:17.285502  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:17.285553  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:17.320580  303486 cri.go:89] found id: ""
	I0920 19:07:17.320606  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.320614  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:17.320620  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:17.320677  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:17.356405  303486 cri.go:89] found id: ""
	I0920 19:07:17.356440  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.356462  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:17.356471  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:17.356526  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:17.391268  303486 cri.go:89] found id: ""
	I0920 19:07:17.391301  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.391309  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:17.391316  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:17.391381  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:17.429886  303486 cri.go:89] found id: ""
	I0920 19:07:17.429938  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.429950  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:17.429959  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:17.430022  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:17.466059  303486 cri.go:89] found id: ""
	I0920 19:07:17.466093  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.466104  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:17.466111  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:17.466176  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:17.501128  303486 cri.go:89] found id: ""
	I0920 19:07:17.501159  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.501168  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:17.501174  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:17.501247  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:17.536969  303486 cri.go:89] found id: ""
	I0920 19:07:17.536999  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.537007  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:17.537016  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:17.537031  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:17.592071  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:17.592119  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:17.609022  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:17.609057  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:17.696393  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:17.696420  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:17.696434  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:17.778077  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:17.778122  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:15.785002  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:18.284101  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:16.463809  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:18.964348  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:19.604758  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:21.608192  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:20.319211  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:20.332158  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:20.332235  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:20.366195  303486 cri.go:89] found id: ""
	I0920 19:07:20.366230  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.366241  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:20.366250  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:20.366313  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:20.401786  303486 cri.go:89] found id: ""
	I0920 19:07:20.401819  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.401829  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:20.401846  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:20.401943  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:20.433684  303486 cri.go:89] found id: ""
	I0920 19:07:20.433711  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.433719  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:20.433725  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:20.433783  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:20.469495  303486 cri.go:89] found id: ""
	I0920 19:07:20.469524  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.469535  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:20.469543  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:20.469613  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:20.502214  303486 cri.go:89] found id: ""
	I0920 19:07:20.502245  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.502256  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:20.502263  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:20.502329  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:20.535829  303486 cri.go:89] found id: ""
	I0920 19:07:20.535867  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.535879  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:20.535887  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:20.535952  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:20.569605  303486 cri.go:89] found id: ""
	I0920 19:07:20.569635  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.569643  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:20.569654  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:20.569726  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:20.603676  303486 cri.go:89] found id: ""
	I0920 19:07:20.603699  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.603706  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:20.603715  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:20.603726  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:20.656645  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:20.656692  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:20.671077  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:20.671107  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:20.740996  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:20.741028  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:20.741046  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:20.820541  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:20.820592  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:23.362973  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:23.380350  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:23.380432  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:23.423145  303486 cri.go:89] found id: ""
	I0920 19:07:23.423183  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.423193  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:23.423202  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:23.423272  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:23.459019  303486 cri.go:89] found id: ""
	I0920 19:07:23.459057  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.459068  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:23.459077  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:23.459144  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:23.502876  303486 cri.go:89] found id: ""
	I0920 19:07:23.502908  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.502920  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:23.502929  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:23.502994  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:23.538440  303486 cri.go:89] found id: ""
	I0920 19:07:23.538471  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.538481  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:23.538489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:23.538552  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:23.575164  303486 cri.go:89] found id: ""
	I0920 19:07:23.575199  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.575211  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:23.575220  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:23.575296  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:23.610449  303486 cri.go:89] found id: ""
	I0920 19:07:23.610480  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.610489  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:23.610495  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:23.610562  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:23.644164  303486 cri.go:89] found id: ""
	I0920 19:07:23.644195  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.644203  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:23.644209  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:23.644275  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:23.684379  303486 cri.go:89] found id: ""
	I0920 19:07:23.684417  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.684428  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:23.684442  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:23.684459  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:23.762838  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:23.762885  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:23.805616  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:23.805650  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:23.857080  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:23.857122  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:23.870602  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:23.870635  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:23.941187  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:20.284264  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:22.284388  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:24.285108  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:21.462493  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:23.467933  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:25.963071  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:24.106087  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:26.605442  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:26.441571  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:26.455091  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:26.455185  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:26.489658  303486 cri.go:89] found id: ""
	I0920 19:07:26.489696  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.489707  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:26.489716  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:26.489773  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:26.528829  303486 cri.go:89] found id: ""
	I0920 19:07:26.528865  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.528878  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:26.528886  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:26.528966  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:26.568402  303486 cri.go:89] found id: ""
	I0920 19:07:26.568429  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.568443  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:26.568450  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:26.568503  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:26.606654  303486 cri.go:89] found id: ""
	I0920 19:07:26.606683  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.606693  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:26.606701  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:26.606764  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:26.640825  303486 cri.go:89] found id: ""
	I0920 19:07:26.640856  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.640864  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:26.640871  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:26.640934  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:26.677023  303486 cri.go:89] found id: ""
	I0920 19:07:26.677054  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.677062  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:26.677068  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:26.677123  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:26.712921  303486 cri.go:89] found id: ""
	I0920 19:07:26.712956  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.712964  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:26.712971  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:26.713031  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:26.747750  303486 cri.go:89] found id: ""
	I0920 19:07:26.747778  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.747786  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:26.747796  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:26.747810  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:26.799240  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:26.799283  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:26.813197  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:26.813233  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:26.882751  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:26.882780  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:26.882799  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:26.965108  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:26.965146  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:26.784306  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:29.283573  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:28.461526  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:30.462242  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:28.606602  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:31.106657  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:29.503960  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:29.516601  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:29.516669  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:29.555581  303486 cri.go:89] found id: ""
	I0920 19:07:29.555622  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.555632  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:29.555640  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:29.555711  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:29.593858  303486 cri.go:89] found id: ""
	I0920 19:07:29.593885  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.593923  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:29.593937  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:29.593990  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:29.629507  303486 cri.go:89] found id: ""
	I0920 19:07:29.629538  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.629548  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:29.629557  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:29.629616  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:29.662880  303486 cri.go:89] found id: ""
	I0920 19:07:29.662913  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.662921  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:29.662928  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:29.662976  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:29.695422  303486 cri.go:89] found id: ""
	I0920 19:07:29.695448  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.695458  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:29.695466  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:29.695531  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:29.730641  303486 cri.go:89] found id: ""
	I0920 19:07:29.730673  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.730685  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:29.730693  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:29.730756  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:29.764186  303486 cri.go:89] found id: ""
	I0920 19:07:29.764220  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.764229  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:29.764238  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:29.764302  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:29.804146  303486 cri.go:89] found id: ""
	I0920 19:07:29.804174  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.804182  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:29.804191  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:29.804204  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:29.885573  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:29.885633  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:29.924619  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:29.924667  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:29.978187  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:29.978230  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:29.992161  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:29.992190  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:30.069767  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:32.570197  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:32.583160  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:32.583244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:32.620842  303486 cri.go:89] found id: ""
	I0920 19:07:32.620870  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.620881  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:32.620899  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:32.620958  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:32.657169  303486 cri.go:89] found id: ""
	I0920 19:07:32.657205  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.657216  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:32.657225  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:32.657292  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:32.694773  303486 cri.go:89] found id: ""
	I0920 19:07:32.694802  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.694809  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:32.694815  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:32.694882  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:32.733318  303486 cri.go:89] found id: ""
	I0920 19:07:32.733350  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.733360  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:32.733370  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:32.733436  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:32.766019  303486 cri.go:89] found id: ""
	I0920 19:07:32.766052  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.766062  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:32.766070  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:32.766138  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:32.801412  303486 cri.go:89] found id: ""
	I0920 19:07:32.801443  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.801454  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:32.801463  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:32.801533  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:32.833743  303486 cri.go:89] found id: ""
	I0920 19:07:32.833771  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.833779  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:32.833787  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:32.833847  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:32.866775  303486 cri.go:89] found id: ""
	I0920 19:07:32.866803  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.866811  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:32.866821  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:32.866839  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:32.919257  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:32.919310  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:32.933554  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:32.933602  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:33.002657  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:33.002702  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:33.002721  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:33.081271  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:33.081316  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:31.284488  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:33.782998  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:32.462645  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:34.963285  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:33.609072  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:36.107460  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:35.627131  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:35.640958  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:35.641032  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:35.675943  303486 cri.go:89] found id: ""
	I0920 19:07:35.675976  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.675984  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:35.675991  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:35.676044  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:35.710075  303486 cri.go:89] found id: ""
	I0920 19:07:35.710104  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.710116  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:35.710124  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:35.710194  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:35.747890  303486 cri.go:89] found id: ""
	I0920 19:07:35.747920  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.747931  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:35.747939  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:35.748004  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:35.786197  303486 cri.go:89] found id: ""
	I0920 19:07:35.786231  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.786242  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:35.786252  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:35.786314  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:35.819109  303486 cri.go:89] found id: ""
	I0920 19:07:35.819146  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.819158  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:35.819168  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:35.819244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:35.853244  303486 cri.go:89] found id: ""
	I0920 19:07:35.853282  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.853292  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:35.853301  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:35.853378  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:35.886864  303486 cri.go:89] found id: ""
	I0920 19:07:35.886897  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.886908  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:35.886917  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:35.886986  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:35.920872  303486 cri.go:89] found id: ""
	I0920 19:07:35.920906  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.920917  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:35.920939  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:35.920957  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:35.998741  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:35.998794  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:36.040681  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:36.040720  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:36.095848  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:36.095909  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:36.110903  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:36.110939  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:36.186658  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:38.687762  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:38.701640  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:38.701708  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:38.734908  303486 cri.go:89] found id: ""
	I0920 19:07:38.734946  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.734956  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:38.734966  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:38.735031  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:38.768062  303486 cri.go:89] found id: ""
	I0920 19:07:38.768100  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.768112  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:38.768120  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:38.768188  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:38.800881  303486 cri.go:89] found id: ""
	I0920 19:07:38.800915  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.800927  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:38.800936  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:38.801004  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:38.835119  303486 cri.go:89] found id: ""
	I0920 19:07:38.835148  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.835156  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:38.835164  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:38.835223  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:38.872677  303486 cri.go:89] found id: ""
	I0920 19:07:38.872712  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.872723  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:38.872733  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:38.872807  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:38.913921  303486 cri.go:89] found id: ""
	I0920 19:07:38.913955  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.913965  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:38.913972  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:38.914029  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:35.783443  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.284549  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:36.963668  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.963893  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.608347  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:41.106313  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.951849  303486 cri.go:89] found id: ""
	I0920 19:07:38.951882  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.951893  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:38.951902  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:38.951972  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:38.988117  303486 cri.go:89] found id: ""
	I0920 19:07:38.988149  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.988161  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:38.988177  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:38.988191  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:39.028804  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:39.028843  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:39.083374  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:39.083427  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:39.097434  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:39.097463  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:39.172185  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:39.172213  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:39.172226  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:41.756648  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:41.772358  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:41.772432  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:41.809067  303486 cri.go:89] found id: ""
	I0920 19:07:41.809109  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.809123  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:41.809132  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:41.809191  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:41.853413  303486 cri.go:89] found id: ""
	I0920 19:07:41.853445  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.853457  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:41.853465  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:41.853524  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:41.891536  303486 cri.go:89] found id: ""
	I0920 19:07:41.891569  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.891580  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:41.891588  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:41.891668  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:41.931046  303486 cri.go:89] found id: ""
	I0920 19:07:41.931085  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.931093  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:41.931099  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:41.931155  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:41.968120  303486 cri.go:89] found id: ""
	I0920 19:07:41.968152  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.968164  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:41.968172  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:41.968240  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:42.002478  303486 cri.go:89] found id: ""
	I0920 19:07:42.002512  303486 logs.go:276] 0 containers: []
	W0920 19:07:42.002523  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:42.002532  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:42.002599  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:42.038031  303486 cri.go:89] found id: ""
	I0920 19:07:42.038067  303486 logs.go:276] 0 containers: []
	W0920 19:07:42.038080  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:42.038087  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:42.038150  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:42.072124  303486 cri.go:89] found id: ""
	I0920 19:07:42.072155  303486 logs.go:276] 0 containers: []
	W0920 19:07:42.072166  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:42.072178  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:42.072195  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:42.128217  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:42.128259  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:42.142291  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:42.142322  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:42.215278  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:42.215305  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:42.215324  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:42.293431  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:42.293476  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:40.784191  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:43.283580  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:41.463429  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:43.963059  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:43.608790  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:46.105338  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:44.836094  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:44.850327  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:44.850397  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:44.884595  303486 cri.go:89] found id: ""
	I0920 19:07:44.884624  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.884632  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:44.884639  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:44.884711  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:44.917727  303486 cri.go:89] found id: ""
	I0920 19:07:44.917754  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.917763  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:44.917769  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:44.917837  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:44.955821  303486 cri.go:89] found id: ""
	I0920 19:07:44.955860  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.955871  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:44.955879  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:44.955937  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:44.994543  303486 cri.go:89] found id: ""
	I0920 19:07:44.994579  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.994590  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:44.994598  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:44.994651  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:45.031839  303486 cri.go:89] found id: ""
	I0920 19:07:45.031877  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.031888  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:45.031896  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:45.031962  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:45.070554  303486 cri.go:89] found id: ""
	I0920 19:07:45.070588  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.070601  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:45.070609  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:45.070678  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:45.108727  303486 cri.go:89] found id: ""
	I0920 19:07:45.108760  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.108771  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:45.108779  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:45.108855  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:45.144045  303486 cri.go:89] found id: ""
	I0920 19:07:45.144075  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.144083  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:45.144094  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:45.144108  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:45.185800  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:45.185834  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:45.238364  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:45.238410  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:45.252111  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:45.252145  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:45.329009  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:45.329036  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:45.329051  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:47.912910  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:47.926378  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:47.926458  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:47.961067  303486 cri.go:89] found id: ""
	I0920 19:07:47.961094  303486 logs.go:276] 0 containers: []
	W0920 19:07:47.961103  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:47.961111  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:47.961172  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:48.006680  303486 cri.go:89] found id: ""
	I0920 19:07:48.006717  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.006729  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:48.006738  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:48.006805  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:48.042230  303486 cri.go:89] found id: ""
	I0920 19:07:48.042261  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.042272  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:48.042281  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:48.042349  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:48.080779  303486 cri.go:89] found id: ""
	I0920 19:07:48.080836  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.080850  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:48.080860  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:48.080931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:48.119439  303486 cri.go:89] found id: ""
	I0920 19:07:48.119469  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.119477  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:48.119483  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:48.119536  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:48.156219  303486 cri.go:89] found id: ""
	I0920 19:07:48.156258  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.156269  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:48.156279  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:48.156354  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:48.192112  303486 cri.go:89] found id: ""
	I0920 19:07:48.192151  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.192162  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:48.192170  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:48.192240  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:48.228916  303486 cri.go:89] found id: ""
	I0920 19:07:48.228958  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.228968  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:48.228981  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:48.229003  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:48.284073  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:48.284115  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:48.297677  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:48.297713  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:48.374834  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:48.374860  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:48.374876  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:48.455468  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:48.455512  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:45.284055  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:47.783744  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:46.461832  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:48.462980  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:50.463485  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:48.605035  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:51.105952  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:50.998354  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:51.012827  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:51.012904  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:51.046701  303486 cri.go:89] found id: ""
	I0920 19:07:51.046739  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.046750  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:51.046758  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:51.046827  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:51.083829  303486 cri.go:89] found id: ""
	I0920 19:07:51.083867  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.083878  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:51.083891  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:51.083965  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:51.124126  303486 cri.go:89] found id: ""
	I0920 19:07:51.124170  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.124180  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:51.124187  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:51.124254  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:51.159141  303486 cri.go:89] found id: ""
	I0920 19:07:51.159175  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.159184  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:51.159190  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:51.159253  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:51.192793  303486 cri.go:89] found id: ""
	I0920 19:07:51.192829  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.192840  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:51.192863  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:51.192938  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:51.225489  303486 cri.go:89] found id: ""
	I0920 19:07:51.225515  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.225524  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:51.225530  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:51.225582  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:51.258256  303486 cri.go:89] found id: ""
	I0920 19:07:51.258283  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.258294  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:51.258301  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:51.258363  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:51.292474  303486 cri.go:89] found id: ""
	I0920 19:07:51.292504  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.292512  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:51.292522  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:51.292537  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:51.331386  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:51.331422  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:51.385136  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:51.385182  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:51.400792  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:51.400828  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:51.492771  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:51.492795  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:51.492810  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:49.784132  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:52.284075  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:54.284870  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:52.963813  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:55.464095  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:53.607259  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:56.106592  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:54.074889  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:54.088453  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:54.088534  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:54.125096  303486 cri.go:89] found id: ""
	I0920 19:07:54.125138  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.125159  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:54.125166  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:54.125231  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:54.159630  303486 cri.go:89] found id: ""
	I0920 19:07:54.159665  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.159676  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:54.159685  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:54.159759  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:54.195919  303486 cri.go:89] found id: ""
	I0920 19:07:54.195951  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.195965  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:54.195972  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:54.196042  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:54.230294  303486 cri.go:89] found id: ""
	I0920 19:07:54.230323  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.230332  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:54.230339  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:54.230396  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:54.266764  303486 cri.go:89] found id: ""
	I0920 19:07:54.266793  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.266800  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:54.266807  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:54.266865  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:54.300704  303486 cri.go:89] found id: ""
	I0920 19:07:54.300731  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.300741  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:54.300750  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:54.300817  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:54.334447  303486 cri.go:89] found id: ""
	I0920 19:07:54.334473  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.334480  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:54.334487  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:54.334546  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:54.369814  303486 cri.go:89] found id: ""
	I0920 19:07:54.369858  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.369866  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:54.369878  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:54.369890  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:54.423088  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:54.423135  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:54.436770  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:54.436801  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:54.510731  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:54.510757  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:54.510773  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:54.593041  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:54.593091  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:57.134030  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:57.147605  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:57.147674  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:57.202662  303486 cri.go:89] found id: ""
	I0920 19:07:57.202690  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.202699  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:57.202705  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:57.202757  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:57.236448  303486 cri.go:89] found id: ""
	I0920 19:07:57.236476  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.236484  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:57.236493  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:57.236558  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:57.269450  303486 cri.go:89] found id: ""
	I0920 19:07:57.269478  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.269485  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:57.269491  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:57.269544  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:57.305749  303486 cri.go:89] found id: ""
	I0920 19:07:57.305784  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.305795  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:57.305806  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:57.305877  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:57.339802  303486 cri.go:89] found id: ""
	I0920 19:07:57.339844  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.339857  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:57.339866  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:57.339942  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:57.371929  303486 cri.go:89] found id: ""
	I0920 19:07:57.371962  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.371971  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:57.371980  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:57.372051  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:57.405749  303486 cri.go:89] found id: ""
	I0920 19:07:57.405789  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.405802  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:57.405812  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:57.405888  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:57.439259  303486 cri.go:89] found id: ""
	I0920 19:07:57.439291  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.439300  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:57.439310  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:57.439323  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:57.491405  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:57.491450  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:57.505992  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:57.506027  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:57.580598  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:57.580623  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:57.580638  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:57.659475  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:57.659513  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:56.783867  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:58.783944  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:57.465789  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:59.963589  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:58.606492  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:01.105967  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:00.201478  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:00.217162  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:00.217228  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:00.252219  303486 cri.go:89] found id: ""
	I0920 19:08:00.252247  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.252256  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:00.252263  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:00.252334  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:00.287244  303486 cri.go:89] found id: ""
	I0920 19:08:00.287283  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.287295  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:00.287302  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:00.287367  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:00.325785  303486 cri.go:89] found id: ""
	I0920 19:08:00.325818  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.325829  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:00.325839  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:00.325931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:00.359718  303486 cri.go:89] found id: ""
	I0920 19:08:00.359747  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.359757  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:00.359766  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:00.359847  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:00.399105  303486 cri.go:89] found id: ""
	I0920 19:08:00.399147  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.399156  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:00.399163  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:00.399227  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:00.433647  303486 cri.go:89] found id: ""
	I0920 19:08:00.433675  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.433683  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:00.433692  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:00.433756  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:00.467771  303486 cri.go:89] found id: ""
	I0920 19:08:00.467820  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.467832  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:00.467841  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:00.467911  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:00.511320  303486 cri.go:89] found id: ""
	I0920 19:08:00.511363  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.511376  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:00.511392  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:00.511414  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:00.594669  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:00.594703  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:00.594723  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:00.672747  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:00.672800  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:00.710001  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:00.710049  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:00.760333  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:00.760378  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:03.274393  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:03.289260  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:03.289352  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:03.327884  303486 cri.go:89] found id: ""
	I0920 19:08:03.327919  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.327932  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:03.327942  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:03.328015  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:03.367259  303486 cri.go:89] found id: ""
	I0920 19:08:03.367289  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.367297  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:03.367303  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:03.367361  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:03.405843  303486 cri.go:89] found id: ""
	I0920 19:08:03.405899  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.405932  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:03.405942  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:03.406056  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:03.441026  303486 cri.go:89] found id: ""
	I0920 19:08:03.441058  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.441069  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:03.441078  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:03.441147  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:03.477213  303486 cri.go:89] found id: ""
	I0920 19:08:03.477249  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.477261  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:03.477327  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:03.477415  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:03.515843  303486 cri.go:89] found id: ""
	I0920 19:08:03.515880  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.515888  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:03.515895  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:03.515945  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:03.566972  303486 cri.go:89] found id: ""
	I0920 19:08:03.567009  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.567020  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:03.567028  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:03.567097  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:03.616957  303486 cri.go:89] found id: ""
	I0920 19:08:03.617000  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.617013  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:03.617029  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:03.617048  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:03.683140  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:03.683192  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:03.697225  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:03.697267  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:03.770430  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:03.770455  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:03.770478  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:03.848796  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:03.848836  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:01.284245  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:03.284437  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:01.964058  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:04.462786  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:03.607506  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:06.106008  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:06.387706  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:06.401600  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:06.401669  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:06.437854  303486 cri.go:89] found id: ""
	I0920 19:08:06.437890  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.437917  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:06.437926  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:06.437993  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:06.472617  303486 cri.go:89] found id: ""
	I0920 19:08:06.472647  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.472655  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:06.472662  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:06.472718  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:06.510083  303486 cri.go:89] found id: ""
	I0920 19:08:06.510118  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.510131  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:06.510140  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:06.510212  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:06.546388  303486 cri.go:89] found id: ""
	I0920 19:08:06.546418  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.546427  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:06.546434  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:06.546485  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:06.584043  303486 cri.go:89] found id: ""
	I0920 19:08:06.584084  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.584096  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:06.584106  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:06.584182  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:06.622118  303486 cri.go:89] found id: ""
	I0920 19:08:06.622147  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.622155  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:06.622161  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:06.622217  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:06.655513  303486 cri.go:89] found id: ""
	I0920 19:08:06.655552  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.655585  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:06.655593  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:06.655657  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:06.690286  303486 cri.go:89] found id: ""
	I0920 19:08:06.690324  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.690336  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:06.690350  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:06.690368  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:06.729229  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:06.729259  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:06.780368  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:06.780411  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:06.794746  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:06.794782  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:06.866918  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:06.866944  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:06.866967  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:05.784123  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:08.284383  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:06.462855  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:08.466867  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:10.963736  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:08.106490  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:10.606291  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:09.451583  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:09.465111  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:09.465178  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:09.497679  303486 cri.go:89] found id: ""
	I0920 19:08:09.497713  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.497725  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:09.497733  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:09.497797  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:09.535297  303486 cri.go:89] found id: ""
	I0920 19:08:09.535334  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.535345  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:09.535353  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:09.535427  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:09.572449  303486 cri.go:89] found id: ""
	I0920 19:08:09.572482  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.572491  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:09.572498  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:09.572608  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:09.612672  303486 cri.go:89] found id: ""
	I0920 19:08:09.612697  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.612705  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:09.612711  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:09.612797  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:09.654366  303486 cri.go:89] found id: ""
	I0920 19:08:09.654399  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.654408  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:09.654415  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:09.654470  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:09.694825  303486 cri.go:89] found id: ""
	I0920 19:08:09.694858  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.694870  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:09.694878  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:09.694942  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:09.731618  303486 cri.go:89] found id: ""
	I0920 19:08:09.731682  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.731693  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:09.731702  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:09.731775  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:09.766717  303486 cri.go:89] found id: ""
	I0920 19:08:09.766755  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.766765  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:09.766779  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:09.766794  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:09.823505  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:09.823549  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:09.837622  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:09.837658  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:09.919105  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:09.919139  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:09.919156  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:10.000899  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:10.000943  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:12.542974  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:12.557265  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:12.557335  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:12.594099  303486 cri.go:89] found id: ""
	I0920 19:08:12.594126  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.594134  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:12.594140  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:12.594199  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:12.627271  303486 cri.go:89] found id: ""
	I0920 19:08:12.627301  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.627308  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:12.627314  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:12.627366  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:12.661225  303486 cri.go:89] found id: ""
	I0920 19:08:12.661256  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.661265  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:12.661272  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:12.661332  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:12.701381  303486 cri.go:89] found id: ""
	I0920 19:08:12.701424  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.701437  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:12.701447  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:12.701524  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:12.739189  303486 cri.go:89] found id: ""
	I0920 19:08:12.739227  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.739235  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:12.739246  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:12.739299  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:12.780931  303486 cri.go:89] found id: ""
	I0920 19:08:12.780958  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.781055  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:12.781068  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:12.781124  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:12.818097  303486 cri.go:89] found id: ""
	I0920 19:08:12.818137  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.818150  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:12.818161  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:12.818294  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:12.852925  303486 cri.go:89] found id: ""
	I0920 19:08:12.852957  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.852965  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:12.852975  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:12.852990  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:12.924746  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:12.924774  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:12.924791  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:13.005668  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:13.005718  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:13.044327  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:13.044359  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:13.094788  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:13.094833  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:10.284510  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:12.783546  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:12.964694  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:15.463615  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:13.105052  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:15.604922  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:15.611965  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:15.625857  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:15.625960  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:15.662138  303486 cri.go:89] found id: ""
	I0920 19:08:15.662169  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.662177  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:15.662184  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:15.662261  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:15.696000  303486 cri.go:89] found id: ""
	I0920 19:08:15.696067  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.696100  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:15.696115  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:15.696234  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:15.735594  303486 cri.go:89] found id: ""
	I0920 19:08:15.735625  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.735633  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:15.735640  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:15.735699  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:15.774666  303486 cri.go:89] found id: ""
	I0920 19:08:15.774693  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.774703  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:15.774712  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:15.774777  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:15.810754  303486 cri.go:89] found id: ""
	I0920 19:08:15.810799  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.810811  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:15.810820  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:15.810884  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:15.846709  303486 cri.go:89] found id: ""
	I0920 19:08:15.846739  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.846748  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:15.846757  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:15.846819  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:15.880798  303486 cri.go:89] found id: ""
	I0920 19:08:15.880825  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.880833  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:15.880839  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:15.880895  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:15.915119  303486 cri.go:89] found id: ""
	I0920 19:08:15.915150  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.915159  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:15.915170  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:15.915186  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:15.966048  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:15.966087  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:15.979287  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:15.979322  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:16.052129  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:16.052163  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:16.052180  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:16.137743  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:16.137788  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:18.678389  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:18.693073  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:18.693152  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:18.734909  303486 cri.go:89] found id: ""
	I0920 19:08:18.734943  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.734954  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:18.734962  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:18.735028  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:18.773472  303486 cri.go:89] found id: ""
	I0920 19:08:18.773506  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.773517  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:18.773525  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:18.773620  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:18.812184  303486 cri.go:89] found id: ""
	I0920 19:08:18.812218  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.812228  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:18.812236  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:18.812305  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:18.846569  303486 cri.go:89] found id: ""
	I0920 19:08:18.846608  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.846619  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:18.846627  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:18.846700  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:18.881794  303486 cri.go:89] found id: ""
	I0920 19:08:18.881836  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.881862  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:18.881870  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:18.881943  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:18.919657  303486 cri.go:89] found id: ""
	I0920 19:08:18.919688  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.919698  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:18.919708  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:18.919774  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:14.784734  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:17.283590  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:19.284056  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:17.962913  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:20.462190  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:18.105736  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:20.106314  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:22.605231  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:18.955117  303486 cri.go:89] found id: ""
	I0920 19:08:18.955146  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.955157  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:18.955166  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:18.955243  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:18.992389  303486 cri.go:89] found id: ""
	I0920 19:08:18.992422  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.992430  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:18.992444  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:18.992460  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:19.070374  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:19.070417  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:19.110793  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:19.110825  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:19.163783  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:19.163830  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:19.177348  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:19.177387  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:19.249469  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:21.749644  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:21.764920  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:21.765006  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:21.803443  303486 cri.go:89] found id: ""
	I0920 19:08:21.803473  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.803481  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:21.803489  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:21.803545  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:21.844552  303486 cri.go:89] found id: ""
	I0920 19:08:21.844582  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.844593  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:21.844601  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:21.844672  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:21.878979  303486 cri.go:89] found id: ""
	I0920 19:08:21.879007  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.879017  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:21.879029  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:21.879099  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:21.915745  303486 cri.go:89] found id: ""
	I0920 19:08:21.915773  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.915783  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:21.915794  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:21.915865  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:21.948999  303486 cri.go:89] found id: ""
	I0920 19:08:21.949031  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.949043  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:21.949052  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:21.949118  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:21.984238  303486 cri.go:89] found id: ""
	I0920 19:08:21.984269  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.984277  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:21.984284  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:21.984357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:22.018581  303486 cri.go:89] found id: ""
	I0920 19:08:22.018610  303486 logs.go:276] 0 containers: []
	W0920 19:08:22.018620  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:22.018628  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:22.018694  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:22.051868  303486 cri.go:89] found id: ""
	I0920 19:08:22.051903  303486 logs.go:276] 0 containers: []
	W0920 19:08:22.051913  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:22.051925  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:22.051942  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:22.106711  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:22.106756  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:22.120910  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:22.120940  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:22.196564  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:22.196591  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:22.196608  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:22.275235  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:22.275288  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:21.785129  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:24.284359  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:22.463122  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:24.962694  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:25.105050  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:27.105237  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:24.821956  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:24.836846  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:24.836918  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:24.878371  303486 cri.go:89] found id: ""
	I0920 19:08:24.878398  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.878406  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:24.878413  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:24.878464  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:24.911450  303486 cri.go:89] found id: ""
	I0920 19:08:24.911480  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.911489  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:24.911497  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:24.911590  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:24.949248  303486 cri.go:89] found id: ""
	I0920 19:08:24.949281  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.949289  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:24.949298  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:24.949352  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:24.987899  303486 cri.go:89] found id: ""
	I0920 19:08:24.987932  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.987939  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:24.987948  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:24.988011  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:25.020589  303486 cri.go:89] found id: ""
	I0920 19:08:25.020627  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.020638  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:25.020646  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:25.020701  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:25.060223  303486 cri.go:89] found id: ""
	I0920 19:08:25.060250  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.060258  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:25.060266  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:25.060331  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:25.099111  303486 cri.go:89] found id: ""
	I0920 19:08:25.099141  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.099151  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:25.099160  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:25.099242  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:25.136055  303486 cri.go:89] found id: ""
	I0920 19:08:25.136089  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.136098  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:25.136118  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:25.136135  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:25.187619  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:25.187658  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:25.200983  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:25.201016  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:25.270746  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:25.270778  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:25.270795  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:25.350009  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:25.350050  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:27.889864  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:27.903156  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:27.903231  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:27.935087  303486 cri.go:89] found id: ""
	I0920 19:08:27.935118  303486 logs.go:276] 0 containers: []
	W0920 19:08:27.935128  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:27.935138  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:27.935199  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:27.970451  303486 cri.go:89] found id: ""
	I0920 19:08:27.970479  303486 logs.go:276] 0 containers: []
	W0920 19:08:27.970487  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:27.970494  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:27.970545  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:28.004931  303486 cri.go:89] found id: ""
	I0920 19:08:28.004980  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.004992  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:28.005002  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:28.005068  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:28.039438  303486 cri.go:89] found id: ""
	I0920 19:08:28.039470  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.039478  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:28.039485  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:28.039535  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:28.076023  303486 cri.go:89] found id: ""
	I0920 19:08:28.076050  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.076058  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:28.076064  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:28.076131  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:28.114726  303486 cri.go:89] found id: ""
	I0920 19:08:28.114761  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.114772  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:28.114781  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:28.114846  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:28.150790  303486 cri.go:89] found id: ""
	I0920 19:08:28.150822  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.150832  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:28.150841  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:28.150908  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:28.186576  303486 cri.go:89] found id: ""
	I0920 19:08:28.186606  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.186614  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:28.186626  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:28.186648  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:28.240939  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:28.240984  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:28.255267  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:28.255304  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:28.327773  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:28.327797  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:28.327809  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:28.418011  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:28.418055  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:26.785099  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:29.284297  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:26.962825  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:28.963261  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:30.963575  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:29.605453  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:32.104848  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:30.962398  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:30.975385  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:30.975471  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:31.009898  303486 cri.go:89] found id: ""
	I0920 19:08:31.009952  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.009964  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:31.009973  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:31.010044  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:31.043639  303486 cri.go:89] found id: ""
	I0920 19:08:31.043670  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.043679  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:31.043689  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:31.043758  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:31.077709  303486 cri.go:89] found id: ""
	I0920 19:08:31.077745  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.077753  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:31.077759  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:31.077818  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:31.111117  303486 cri.go:89] found id: ""
	I0920 19:08:31.111150  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.111160  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:31.111168  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:31.111234  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:31.143888  303486 cri.go:89] found id: ""
	I0920 19:08:31.143921  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.143933  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:31.143942  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:31.144014  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:31.176694  303486 cri.go:89] found id: ""
	I0920 19:08:31.176729  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.176742  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:31.176751  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:31.176815  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:31.213794  303486 cri.go:89] found id: ""
	I0920 19:08:31.213832  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.213844  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:31.213854  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:31.213946  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:31.250160  303486 cri.go:89] found id: ""
	I0920 19:08:31.250219  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.250230  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:31.250244  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:31.250261  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:31.263748  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:31.263784  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:31.337719  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:31.337749  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:31.337762  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:31.420398  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:31.420446  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:31.459992  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:31.460030  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:31.284818  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:33.783288  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:33.462900  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:35.463122  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:34.105758  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:36.604917  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:34.014229  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:34.028129  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:34.028194  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:34.060793  303486 cri.go:89] found id: ""
	I0920 19:08:34.060832  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.060850  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:34.060859  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:34.060919  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:34.094440  303486 cri.go:89] found id: ""
	I0920 19:08:34.094467  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.094475  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:34.094481  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:34.094544  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:34.128824  303486 cri.go:89] found id: ""
	I0920 19:08:34.128861  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.128872  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:34.128881  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:34.128948  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:34.160861  303486 cri.go:89] found id: ""
	I0920 19:08:34.160894  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.160903  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:34.160911  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:34.160967  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:34.196897  303486 cri.go:89] found id: ""
	I0920 19:08:34.196933  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.196952  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:34.196958  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:34.197020  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:34.229083  303486 cri.go:89] found id: ""
	I0920 19:08:34.229115  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.229125  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:34.229134  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:34.229205  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:34.261877  303486 cri.go:89] found id: ""
	I0920 19:08:34.261922  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.261933  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:34.261941  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:34.262008  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:34.296145  303486 cri.go:89] found id: ""
	I0920 19:08:34.296177  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.296189  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:34.296199  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:34.296214  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:34.361598  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:34.361624  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:34.361641  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:34.441067  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:34.441110  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:34.483333  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:34.483362  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:34.538345  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:34.538388  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:37.053155  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:37.067157  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:37.067230  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:37.101432  303486 cri.go:89] found id: ""
	I0920 19:08:37.101466  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.101476  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:37.101485  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:37.101550  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:37.134375  303486 cri.go:89] found id: ""
	I0920 19:08:37.134408  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.134416  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:37.134423  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:37.134487  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:37.167049  303486 cri.go:89] found id: ""
	I0920 19:08:37.167087  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.167099  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:37.167107  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:37.167175  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:37.209358  303486 cri.go:89] found id: ""
	I0920 19:08:37.209387  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.209397  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:37.209405  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:37.209470  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:37.243227  303486 cri.go:89] found id: ""
	I0920 19:08:37.243261  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.243272  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:37.243281  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:37.243332  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:37.276546  303486 cri.go:89] found id: ""
	I0920 19:08:37.276596  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.276607  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:37.276626  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:37.276688  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:37.311233  303486 cri.go:89] found id: ""
	I0920 19:08:37.311268  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.311279  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:37.311287  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:37.311352  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:37.349970  303486 cri.go:89] found id: ""
	I0920 19:08:37.350003  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.350013  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:37.350025  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:37.350041  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:37.399405  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:37.399445  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:37.423764  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:37.423800  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:37.498797  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:37.498826  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:37.498841  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:37.575521  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:37.575566  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:35.783897  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:37.784496  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:37.463224  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:39.463445  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:38.605444  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:40.606712  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:40.118650  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:40.131967  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:40.132051  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:40.165313  303486 cri.go:89] found id: ""
	I0920 19:08:40.165349  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.165358  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:40.165366  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:40.165439  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:40.197194  303486 cri.go:89] found id: ""
	I0920 19:08:40.197223  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.197232  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:40.197238  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:40.197289  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:40.236769  303486 cri.go:89] found id: ""
	I0920 19:08:40.236800  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.236810  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:40.236819  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:40.236888  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:40.271960  303486 cri.go:89] found id: ""
	I0920 19:08:40.271984  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.271992  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:40.271998  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:40.272049  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:40.307874  303486 cri.go:89] found id: ""
	I0920 19:08:40.307909  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.307917  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:40.307923  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:40.307982  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:40.342128  303486 cri.go:89] found id: ""
	I0920 19:08:40.342160  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.342168  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:40.342175  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:40.342233  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:40.381493  303486 cri.go:89] found id: ""
	I0920 19:08:40.381529  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.381542  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:40.381551  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:40.381617  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:40.415164  303486 cri.go:89] found id: ""
	I0920 19:08:40.415199  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.415211  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:40.415222  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:40.415238  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:40.488306  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:40.488330  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:40.488350  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:40.567193  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:40.567235  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:40.607256  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:40.607287  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:40.659504  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:40.659542  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:43.174043  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:43.188690  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:43.188790  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:43.227223  303486 cri.go:89] found id: ""
	I0920 19:08:43.227251  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.227259  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:43.227267  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:43.227356  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:43.260099  303486 cri.go:89] found id: ""
	I0920 19:08:43.260128  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.260137  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:43.260143  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:43.260195  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:43.297846  303486 cri.go:89] found id: ""
	I0920 19:08:43.297875  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.297886  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:43.297894  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:43.297980  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:43.334026  303486 cri.go:89] found id: ""
	I0920 19:08:43.334061  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.334070  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:43.334078  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:43.334147  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:43.367765  303486 cri.go:89] found id: ""
	I0920 19:08:43.367795  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.367806  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:43.367814  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:43.367890  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:43.402722  303486 cri.go:89] found id: ""
	I0920 19:08:43.402766  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.402778  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:43.402787  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:43.402852  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:43.439643  303486 cri.go:89] found id: ""
	I0920 19:08:43.439674  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.439682  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:43.439690  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:43.439742  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:43.475931  303486 cri.go:89] found id: ""
	I0920 19:08:43.475965  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.475976  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:43.475991  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:43.476006  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:43.545694  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:43.545725  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:43.545739  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:43.627493  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:43.627549  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:43.667758  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:43.667794  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:43.721803  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:43.721851  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:40.285524  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:42.784336  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:41.962300  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:43.963712  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:45.963766  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:43.105271  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:45.105737  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:47.604667  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:46.237499  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:46.250854  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:46.250925  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:46.288918  303486 cri.go:89] found id: ""
	I0920 19:08:46.288950  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.288957  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:46.288964  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:46.289026  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:46.321113  303486 cri.go:89] found id: ""
	I0920 19:08:46.321149  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.321159  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:46.321168  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:46.321239  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:46.359606  303486 cri.go:89] found id: ""
	I0920 19:08:46.359643  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.359652  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:46.359659  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:46.359729  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:46.397059  303486 cri.go:89] found id: ""
	I0920 19:08:46.397089  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.397098  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:46.397104  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:46.397174  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:46.438224  303486 cri.go:89] found id: ""
	I0920 19:08:46.438261  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.438271  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:46.438279  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:46.438355  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:46.476933  303486 cri.go:89] found id: ""
	I0920 19:08:46.476963  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.476973  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:46.476981  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:46.477047  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:46.522115  303486 cri.go:89] found id: ""
	I0920 19:08:46.522150  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.522160  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:46.522167  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:46.522236  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:46.555508  303486 cri.go:89] found id: ""
	I0920 19:08:46.555541  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.555551  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:46.555565  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:46.555580  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:46.632314  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:46.632358  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:46.672381  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:46.672420  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:46.725777  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:46.725835  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:46.739924  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:46.739959  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:46.816667  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:45.284171  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:47.284420  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:49.284798  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:48.462088  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:50.463100  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:49.606279  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:52.105103  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:49.317620  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:49.331792  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:49.331872  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:49.365417  303486 cri.go:89] found id: ""
	I0920 19:08:49.365457  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.365470  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:49.365479  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:49.365543  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:49.399422  303486 cri.go:89] found id: ""
	I0920 19:08:49.399455  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.399465  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:49.399474  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:49.399532  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:49.433040  303486 cri.go:89] found id: ""
	I0920 19:08:49.433069  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.433076  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:49.433082  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:49.433149  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:49.466865  303486 cri.go:89] found id: ""
	I0920 19:08:49.466897  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.466909  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:49.466917  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:49.466986  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:49.499542  303486 cri.go:89] found id: ""
	I0920 19:08:49.499574  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.499583  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:49.499589  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:49.499639  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:49.534310  303486 cri.go:89] found id: ""
	I0920 19:08:49.534338  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.534346  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:49.534353  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:49.534411  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:49.580271  303486 cri.go:89] found id: ""
	I0920 19:08:49.580297  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.580305  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:49.580312  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:49.580385  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:49.626519  303486 cri.go:89] found id: ""
	I0920 19:08:49.626554  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.626562  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:49.626572  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:49.626587  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:49.682923  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:49.682963  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:49.695859  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:49.695895  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:49.767626  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:49.767669  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:49.767697  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:49.849570  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:49.849614  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:52.387653  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:52.400693  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:52.400757  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:52.434320  303486 cri.go:89] found id: ""
	I0920 19:08:52.434358  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.434369  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:52.434381  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:52.434448  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:52.469167  303486 cri.go:89] found id: ""
	I0920 19:08:52.469202  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.469214  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:52.469222  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:52.469291  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:52.504241  303486 cri.go:89] found id: ""
	I0920 19:08:52.504287  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.504295  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:52.504304  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:52.504367  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:52.539573  303486 cri.go:89] found id: ""
	I0920 19:08:52.539604  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.539613  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:52.539619  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:52.539697  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:52.573794  303486 cri.go:89] found id: ""
	I0920 19:08:52.573821  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.573829  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:52.573834  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:52.573931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:52.607628  303486 cri.go:89] found id: ""
	I0920 19:08:52.607660  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.607670  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:52.607676  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:52.607738  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:52.639088  303486 cri.go:89] found id: ""
	I0920 19:08:52.639121  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.639132  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:52.639140  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:52.639204  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:52.673585  303486 cri.go:89] found id: ""
	I0920 19:08:52.673624  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.673636  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:52.673650  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:52.673667  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:52.726463  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:52.726504  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:52.739520  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:52.739553  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:52.820610  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:52.820638  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:52.820653  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:52.898567  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:52.898612  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:51.783687  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:53.784963  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:52.962326  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:54.963069  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:54.105159  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:56.604367  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:55.440875  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:55.454526  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:55.454602  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:55.490616  303486 cri.go:89] found id: ""
	I0920 19:08:55.490655  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.490664  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:55.490671  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:55.490735  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:55.530256  303486 cri.go:89] found id: ""
	I0920 19:08:55.530287  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.530296  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:55.530304  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:55.530357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:55.565209  303486 cri.go:89] found id: ""
	I0920 19:08:55.565242  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.565253  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:55.565260  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:55.565319  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:55.599522  303486 cri.go:89] found id: ""
	I0920 19:08:55.599553  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.599563  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:55.599571  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:55.599634  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:55.634662  303486 cri.go:89] found id: ""
	I0920 19:08:55.634692  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.634700  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:55.634707  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:55.634759  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:55.670326  303486 cri.go:89] found id: ""
	I0920 19:08:55.670361  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.670372  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:55.670379  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:55.670434  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:55.702589  303486 cri.go:89] found id: ""
	I0920 19:08:55.702617  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.702625  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:55.702632  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:55.702694  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:55.737615  303486 cri.go:89] found id: ""
	I0920 19:08:55.737643  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.737653  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:55.737667  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:55.737682  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:55.816827  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:55.816873  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:55.855521  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:55.855550  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:55.905002  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:55.905047  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:55.918292  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:55.918324  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:55.987445  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:58.488566  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:58.503898  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:58.504001  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:58.539089  303486 cri.go:89] found id: ""
	I0920 19:08:58.539117  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.539127  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:58.539135  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:58.539199  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:58.576432  303486 cri.go:89] found id: ""
	I0920 19:08:58.576459  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.576467  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:58.576473  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:58.576542  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:58.613779  303486 cri.go:89] found id: ""
	I0920 19:08:58.613814  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.613825  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:58.613833  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:58.613932  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:58.648717  303486 cri.go:89] found id: ""
	I0920 19:08:58.648757  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.648768  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:58.648777  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:58.648845  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:58.681533  303486 cri.go:89] found id: ""
	I0920 19:08:58.681568  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.681585  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:58.681593  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:58.681647  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:58.714833  303486 cri.go:89] found id: ""
	I0920 19:08:58.714867  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.714877  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:58.714886  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:58.714951  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:58.755939  303486 cri.go:89] found id: ""
	I0920 19:08:58.755972  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.755980  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:58.755986  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:58.756037  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:58.793195  303486 cri.go:89] found id: ""
	I0920 19:08:58.793229  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.793240  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:58.793252  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:58.793267  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:58.807903  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:58.807939  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:58.873993  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:58.874022  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:58.874042  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:56.283846  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.286474  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:56.963398  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.963513  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.606087  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:01.106199  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.955201  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:58.955249  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:58.994230  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:58.994265  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:01.548403  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:01.561467  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:01.561541  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:01.595339  303486 cri.go:89] found id: ""
	I0920 19:09:01.595374  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.595382  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:01.595388  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:01.595463  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:01.631995  303486 cri.go:89] found id: ""
	I0920 19:09:01.632033  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.632043  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:01.632051  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:01.632119  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:01.667556  303486 cri.go:89] found id: ""
	I0920 19:09:01.667586  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.667596  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:01.667604  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:01.667669  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:01.702678  303486 cri.go:89] found id: ""
	I0920 19:09:01.702708  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.702716  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:01.702723  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:01.702786  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:01.739953  303486 cri.go:89] found id: ""
	I0920 19:09:01.739987  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.739999  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:01.740008  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:01.740075  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:01.774188  303486 cri.go:89] found id: ""
	I0920 19:09:01.774222  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.774239  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:01.774249  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:01.774317  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:01.808885  303486 cri.go:89] found id: ""
	I0920 19:09:01.808916  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.808927  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:01.808935  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:01.808997  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:01.842357  303486 cri.go:89] found id: ""
	I0920 19:09:01.842394  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.842404  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:01.842417  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:01.842433  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:01.881750  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:01.881782  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:01.932190  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:01.932236  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:01.946305  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:01.946337  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:02.020099  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:02.020127  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:02.020141  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:00.784428  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:03.284109  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:01.462613  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:03.962360  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:05.963735  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:03.605623  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:06.104994  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:04.601186  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:04.614292  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:04.614374  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:04.649579  303486 cri.go:89] found id: ""
	I0920 19:09:04.649611  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.649619  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:04.649625  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:04.649683  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:04.684039  303486 cri.go:89] found id: ""
	I0920 19:09:04.684076  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.684094  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:04.684108  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:04.684182  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:04.729130  303486 cri.go:89] found id: ""
	I0920 19:09:04.729166  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.729177  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:04.729186  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:04.729244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:04.762646  303486 cri.go:89] found id: ""
	I0920 19:09:04.762682  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.762690  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:04.762697  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:04.762761  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:04.797492  303486 cri.go:89] found id: ""
	I0920 19:09:04.797518  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.797527  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:04.797533  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:04.797588  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:04.832780  303486 cri.go:89] found id: ""
	I0920 19:09:04.832813  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.832823  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:04.832831  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:04.832893  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:04.868489  303486 cri.go:89] found id: ""
	I0920 19:09:04.868526  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.868537  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:04.868546  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:04.868613  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:04.901115  303486 cri.go:89] found id: ""
	I0920 19:09:04.901156  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.901164  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:04.901174  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:04.901186  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:04.952435  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:04.952482  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:04.966450  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:04.966481  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:05.035951  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:05.035977  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:05.035991  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:05.120961  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:05.121016  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:07.659497  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:07.672989  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:07.673062  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:07.708200  303486 cri.go:89] found id: ""
	I0920 19:09:07.708236  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.708247  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:07.708256  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:07.708320  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:07.742116  303486 cri.go:89] found id: ""
	I0920 19:09:07.742156  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.742166  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:07.742175  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:07.742231  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:07.774369  303486 cri.go:89] found id: ""
	I0920 19:09:07.774401  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.774410  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:07.774419  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:07.774485  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:07.811727  303486 cri.go:89] found id: ""
	I0920 19:09:07.811756  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.811763  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:07.811769  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:07.811825  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:07.849613  303486 cri.go:89] found id: ""
	I0920 19:09:07.849646  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.849655  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:07.849661  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:07.849715  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:07.884643  303486 cri.go:89] found id: ""
	I0920 19:09:07.884679  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.884690  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:07.884698  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:07.884770  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:07.920240  303486 cri.go:89] found id: ""
	I0920 19:09:07.920272  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.920283  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:07.920292  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:07.920371  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:07.954729  303486 cri.go:89] found id: ""
	I0920 19:09:07.954768  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.954780  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:07.954792  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:07.954808  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:08.008679  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:08.008732  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:08.023637  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:08.023673  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:08.097298  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:08.097325  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:08.097340  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:08.173404  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:08.173444  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:05.783765  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:08.283642  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:08.462994  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:10.965062  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:08.106350  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:10.605138  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:12.605390  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:10.718224  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:10.732520  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:10.732593  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:10.766764  303486 cri.go:89] found id: ""
	I0920 19:09:10.766800  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.766811  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:10.766821  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:10.766887  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:10.800039  303486 cri.go:89] found id: ""
	I0920 19:09:10.800077  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.800087  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:10.800095  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:10.800157  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:10.833931  303486 cri.go:89] found id: ""
	I0920 19:09:10.833969  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.833979  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:10.833985  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:10.834057  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:10.867714  303486 cri.go:89] found id: ""
	I0920 19:09:10.867752  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.867763  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:10.867771  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:10.867840  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:10.903026  303486 cri.go:89] found id: ""
	I0920 19:09:10.903060  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.903068  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:10.903075  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:10.903131  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:10.936968  303486 cri.go:89] found id: ""
	I0920 19:09:10.937002  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.937013  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:10.937021  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:10.937089  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:10.973055  303486 cri.go:89] found id: ""
	I0920 19:09:10.973079  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.973087  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:10.973093  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:10.973145  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:11.010283  303486 cri.go:89] found id: ""
	I0920 19:09:11.010310  303486 logs.go:276] 0 containers: []
	W0920 19:09:11.010321  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:11.010333  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:11.010352  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:11.025202  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:11.025239  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:11.104268  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:11.104295  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:11.104312  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:11.182281  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:11.182326  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:11.219296  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:11.219335  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:13.767833  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:13.780805  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:13.780890  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:13.822288  303486 cri.go:89] found id: ""
	I0920 19:09:13.822317  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.822327  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:13.822334  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:13.822388  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:13.862068  303486 cri.go:89] found id: ""
	I0920 19:09:13.862098  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.862106  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:13.862112  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:13.862163  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:13.898497  303486 cri.go:89] found id: ""
	I0920 19:09:13.898529  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.898540  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:13.898550  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:13.898618  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:13.935994  303486 cri.go:89] found id: ""
	I0920 19:09:13.936022  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.936030  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:13.936038  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:13.936105  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:10.277863  302869 pod_ready.go:82] duration metric: took 4m0.000569658s for pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace to be "Ready" ...
	E0920 19:09:10.277919  302869 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 19:09:10.277965  302869 pod_ready.go:39] duration metric: took 4m13.052343801s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:10.278003  302869 kubeadm.go:597] duration metric: took 4m21.10965758s to restartPrimaryControlPlane
	W0920 19:09:10.278125  302869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 19:09:10.278168  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
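At this point the 4m0s readiness wait for metrics-server-6867b74b74-qqhcw has expired, so minikube gives up on restarting the existing control plane and falls back to a full kubeadm reset before re-bootstrapping. The condition it was polling is the pod's Ready condition; a rough manual equivalent is sketched below, assuming kubectl is pointed at the same profile's kubeconfig (the pod name and namespace are taken from the log, everything else is illustrative):

	# wait up to 4 minutes for the metrics-server pod's Ready condition
	kubectl -n kube-system wait --for=condition=Ready \
	  pod/metrics-server-6867b74b74-qqhcw --timeout=4m
	# if it never becomes Ready, inspect the pod's events and conditions
	kubectl -n kube-system describe pod metrics-server-6867b74b74-qqhcw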
	I0920 19:09:13.462752  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:15.962371  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:14.605565  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:17.112026  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:13.973764  303486 cri.go:89] found id: ""
	I0920 19:09:13.973801  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.973812  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:13.973820  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:13.973898  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:14.009443  303486 cri.go:89] found id: ""
	I0920 19:09:14.009482  303486 logs.go:276] 0 containers: []
	W0920 19:09:14.009494  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:14.009502  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:14.009577  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:14.045593  303486 cri.go:89] found id: ""
	I0920 19:09:14.045629  303486 logs.go:276] 0 containers: []
	W0920 19:09:14.045639  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:14.045648  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:14.045714  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:14.086273  303486 cri.go:89] found id: ""
	I0920 19:09:14.086310  303486 logs.go:276] 0 containers: []
	W0920 19:09:14.086319  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:14.086330  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:14.086343  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:14.140730  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:14.140772  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:14.154198  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:14.154232  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:14.224716  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:14.224739  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:14.224754  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:14.302625  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:14.302665  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:16.840816  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:16.854905  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:16.855002  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:16.892994  303486 cri.go:89] found id: ""
	I0920 19:09:16.893028  303486 logs.go:276] 0 containers: []
	W0920 19:09:16.893038  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:16.893045  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:16.893103  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:16.931265  303486 cri.go:89] found id: ""
	I0920 19:09:16.931293  303486 logs.go:276] 0 containers: []
	W0920 19:09:16.931307  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:16.931313  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:16.931364  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:16.970085  303486 cri.go:89] found id: ""
	I0920 19:09:16.970119  303486 logs.go:276] 0 containers: []
	W0920 19:09:16.970129  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:16.970138  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:16.970189  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:17.003163  303486 cri.go:89] found id: ""
	I0920 19:09:17.003194  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.003206  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:17.003214  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:17.003282  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:17.040577  303486 cri.go:89] found id: ""
	I0920 19:09:17.040618  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.040633  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:17.040640  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:17.040706  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:17.073946  303486 cri.go:89] found id: ""
	I0920 19:09:17.073986  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.073995  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:17.074006  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:17.074066  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:17.111569  303486 cri.go:89] found id: ""
	I0920 19:09:17.111636  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.111648  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:17.111664  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:17.111730  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:17.148005  303486 cri.go:89] found id: ""
	I0920 19:09:17.148034  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.148044  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:17.148056  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:17.148072  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:17.222281  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:17.222306  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:17.222324  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:17.297577  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:17.297619  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:17.334709  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:17.334740  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:17.386279  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:17.386320  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:17.962802  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:19.963289  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:19.605813  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:22.105024  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:19.901017  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:19.914489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:19.914571  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:19.955023  303486 cri.go:89] found id: ""
	I0920 19:09:19.955051  303486 logs.go:276] 0 containers: []
	W0920 19:09:19.955060  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:19.955067  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:19.955125  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:19.995536  303486 cri.go:89] found id: ""
	I0920 19:09:19.995575  303486 logs.go:276] 0 containers: []
	W0920 19:09:19.995585  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:19.995594  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:19.995650  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:20.031153  303486 cri.go:89] found id: ""
	I0920 19:09:20.031181  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.031190  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:20.031198  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:20.031266  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:20.064145  303486 cri.go:89] found id: ""
	I0920 19:09:20.064174  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.064190  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:20.064199  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:20.064256  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:20.098399  303486 cri.go:89] found id: ""
	I0920 19:09:20.098429  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.098440  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:20.098449  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:20.098505  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:20.138805  303486 cri.go:89] found id: ""
	I0920 19:09:20.138833  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.138843  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:20.138852  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:20.138914  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:20.183291  303486 cri.go:89] found id: ""
	I0920 19:09:20.183322  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.183333  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:20.183342  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:20.183406  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:20.220344  303486 cri.go:89] found id: ""
	I0920 19:09:20.220378  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.220396  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:20.220409  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:20.220426  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:20.271043  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:20.271086  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:20.286724  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:20.286754  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:20.358233  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:20.358273  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:20.358291  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:20.439511  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:20.439568  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:22.982570  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:22.995384  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:22.995475  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:23.029031  303486 cri.go:89] found id: ""
	I0920 19:09:23.029069  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.029081  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:23.029091  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:23.029166  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:23.063291  303486 cri.go:89] found id: ""
	I0920 19:09:23.063325  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.063336  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:23.063343  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:23.063413  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:23.097494  303486 cri.go:89] found id: ""
	I0920 19:09:23.097525  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.097536  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:23.097545  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:23.097610  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:23.132169  303486 cri.go:89] found id: ""
	I0920 19:09:23.132197  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.132204  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:23.132211  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:23.132276  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:23.173651  303486 cri.go:89] found id: ""
	I0920 19:09:23.173682  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.173692  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:23.173700  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:23.173763  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:23.206098  303486 cri.go:89] found id: ""
	I0920 19:09:23.206135  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.206146  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:23.206155  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:23.206216  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:23.245422  303486 cri.go:89] found id: ""
	I0920 19:09:23.245466  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.245479  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:23.245489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:23.245569  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:23.280326  303486 cri.go:89] found id: ""
	I0920 19:09:23.280357  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.280365  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:23.280376  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:23.280390  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:23.330986  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:23.331034  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:23.344751  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:23.344788  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:23.420213  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:23.420239  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:23.420255  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:23.500449  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:23.500491  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:22.462590  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:24.962516  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:24.105502  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:26.110930  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:26.040050  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:26.056377  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:26.056463  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:26.094122  303486 cri.go:89] found id: ""
	I0920 19:09:26.094160  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.094170  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:26.094179  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:26.094246  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:26.129383  303486 cri.go:89] found id: ""
	I0920 19:09:26.129408  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.129415  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:26.129422  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:26.129472  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:26.163579  303486 cri.go:89] found id: ""
	I0920 19:09:26.163611  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.163621  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:26.163630  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:26.163699  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:26.208026  303486 cri.go:89] found id: ""
	I0920 19:09:26.208057  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.208065  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:26.208071  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:26.208138  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:26.245375  303486 cri.go:89] found id: ""
	I0920 19:09:26.245409  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.245421  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:26.245438  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:26.245500  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:26.280283  303486 cri.go:89] found id: ""
	I0920 19:09:26.280315  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.280326  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:26.280336  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:26.280397  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:26.314621  303486 cri.go:89] found id: ""
	I0920 19:09:26.314657  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.314670  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:26.314679  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:26.314773  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:26.347667  303486 cri.go:89] found id: ""
	I0920 19:09:26.347694  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.347701  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:26.347711  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:26.347722  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:26.397221  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:26.397259  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:26.411126  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:26.411157  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:26.479631  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:26.479657  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:26.479686  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:26.555439  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:26.555477  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:26.962845  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:28.963560  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:28.605949  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:30.612349  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:32.104187  303063 pod_ready.go:82] duration metric: took 4m0.005608637s for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	E0920 19:09:32.104213  303063 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 19:09:32.104224  303063 pod_ready.go:39] duration metric: took 4m5.679030104s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:32.104241  303063 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:09:32.104273  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:32.104327  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:32.151755  303063 cri.go:89] found id: "f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:32.151778  303063 cri.go:89] found id: ""
	I0920 19:09:32.151787  303063 logs.go:276] 1 containers: [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f]
	I0920 19:09:32.151866  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.157358  303063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:32.157426  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:32.201227  303063 cri.go:89] found id: "5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:32.201255  303063 cri.go:89] found id: ""
	I0920 19:09:32.201263  303063 logs.go:276] 1 containers: [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281]
	I0920 19:09:32.201327  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.206508  303063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:32.206604  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:32.243509  303063 cri.go:89] found id: "88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:32.243533  303063 cri.go:89] found id: ""
	I0920 19:09:32.243542  303063 logs.go:276] 1 containers: [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f]
	I0920 19:09:32.243595  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.247764  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:32.247836  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:32.283590  303063 cri.go:89] found id: "25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:32.283627  303063 cri.go:89] found id: ""
	I0920 19:09:32.283637  303063 logs.go:276] 1 containers: [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862]
	I0920 19:09:32.283727  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.287826  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:32.287893  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:32.329071  303063 cri.go:89] found id: "3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:32.329111  303063 cri.go:89] found id: ""
	I0920 19:09:32.329123  303063 logs.go:276] 1 containers: [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4]
	I0920 19:09:32.329196  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.333152  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:32.333236  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:32.372444  303063 cri.go:89] found id: "9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:32.372474  303063 cri.go:89] found id: ""
	I0920 19:09:32.372485  303063 logs.go:276] 1 containers: [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba]
	I0920 19:09:32.372548  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.376414  303063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:32.376494  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:32.412244  303063 cri.go:89] found id: ""
	I0920 19:09:32.412280  303063 logs.go:276] 0 containers: []
	W0920 19:09:32.412291  303063 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:32.412299  303063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 19:09:32.412352  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 19:09:32.449451  303063 cri.go:89] found id: "a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:32.449472  303063 cri.go:89] found id: "0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:32.449477  303063 cri.go:89] found id: ""
	I0920 19:09:32.449485  303063 logs.go:276] 2 containers: [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85]
	I0920 19:09:32.449544  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.454960  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.459688  303063 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:32.459720  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:09:32.599208  303063 logs.go:123] Gathering logs for etcd [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281] ...
	I0920 19:09:32.599241  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:32.656960  303063 logs.go:123] Gathering logs for kube-proxy [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4] ...
	I0920 19:09:32.657000  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:32.703259  303063 logs.go:123] Gathering logs for kube-controller-manager [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba] ...
	I0920 19:09:32.703308  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:32.769218  303063 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:32.769260  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:29.096877  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:29.110081  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:29.110170  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:29.152570  303486 cri.go:89] found id: ""
	I0920 19:09:29.152598  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.152608  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:29.152616  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:29.152689  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:29.188596  303486 cri.go:89] found id: ""
	I0920 19:09:29.188627  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.188638  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:29.188645  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:29.188713  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:29.228789  303486 cri.go:89] found id: ""
	I0920 19:09:29.228831  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.228841  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:29.228850  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:29.228913  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:29.260013  303486 cri.go:89] found id: ""
	I0920 19:09:29.260040  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.260048  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:29.260054  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:29.260105  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:29.293373  303486 cri.go:89] found id: ""
	I0920 19:09:29.293401  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.293411  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:29.293418  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:29.293487  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:29.325860  303486 cri.go:89] found id: ""
	I0920 19:09:29.325898  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.325925  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:29.325935  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:29.326027  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:29.358873  303486 cri.go:89] found id: ""
	I0920 19:09:29.358909  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.358921  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:29.358930  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:29.358994  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:29.392029  303486 cri.go:89] found id: ""
	I0920 19:09:29.392057  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.392067  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:29.392080  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:29.392095  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:29.467460  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:29.467504  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:29.508258  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:29.508298  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:29.559238  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:29.559274  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:29.574233  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:29.574264  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:29.649318  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:32.150539  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:32.168442  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:32.168527  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:32.210069  303486 cri.go:89] found id: ""
	I0920 19:09:32.210103  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.210120  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:32.210129  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:32.210191  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:32.243468  303486 cri.go:89] found id: ""
	I0920 19:09:32.243501  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.243511  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:32.243519  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:32.243586  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:32.275958  303486 cri.go:89] found id: ""
	I0920 19:09:32.275988  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.275996  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:32.276003  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:32.276056  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:32.312560  303486 cri.go:89] found id: ""
	I0920 19:09:32.312598  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.312609  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:32.312620  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:32.312695  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:32.347157  303486 cri.go:89] found id: ""
	I0920 19:09:32.347185  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.347193  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:32.347200  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:32.347264  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:32.382787  303486 cri.go:89] found id: ""
	I0920 19:09:32.382820  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.382832  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:32.382841  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:32.382898  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:32.416182  303486 cri.go:89] found id: ""
	I0920 19:09:32.416216  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.416226  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:32.416234  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:32.416297  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:32.448863  303486 cri.go:89] found id: ""
	I0920 19:09:32.448895  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.448906  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:32.448919  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:32.448934  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:32.501882  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:32.501934  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:32.517984  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:32.518014  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:32.588517  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:32.588547  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:32.588560  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:32.671869  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:32.671921  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:35.211780  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:35.225476  303486 kubeadm.go:597] duration metric: took 4m2.827297435s to restartPrimaryControlPlane
	W0920 19:09:35.225582  303486 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 19:09:35.225618  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:09:35.686956  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:35.701803  303486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:09:35.712572  303486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:09:35.722867  303486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:09:35.722894  303486 kubeadm.go:157] found existing configuration files:
	
	I0920 19:09:35.722948  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:09:35.732295  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:09:35.732358  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:09:35.741569  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:09:35.750515  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:09:35.750577  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:09:35.760469  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:09:35.770207  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:09:35.770284  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:09:35.780121  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:09:35.789887  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:09:35.789974  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:09:35.800914  303486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:09:35.871635  303486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 19:09:35.871691  303486 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:09:36.021411  303486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:09:36.021565  303486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:09:36.021773  303486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 19:09:36.217540  303486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:09:31.462557  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:33.463284  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:35.964501  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:36.723149  302869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.444941441s)
	I0920 19:09:36.723244  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:36.740763  302869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:09:36.751727  302869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:09:36.762710  302869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:09:36.762736  302869 kubeadm.go:157] found existing configuration files:
	
	I0920 19:09:36.762793  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:09:36.773454  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:09:36.773536  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:09:36.784738  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:09:36.794740  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:09:36.794818  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:09:36.805727  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:09:36.818253  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:09:36.818329  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:09:36.831210  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:09:36.842838  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:09:36.842914  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:09:36.853306  302869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:09:36.903121  302869 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 19:09:36.903285  302869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:09:37.025789  302869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:09:37.025969  302869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:09:37.026110  302869 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 19:09:37.034613  302869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:09:36.219542  303486 out.go:235]   - Generating certificates and keys ...
	I0920 19:09:36.219684  303486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:09:36.219769  303486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:09:36.219892  303486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:09:36.219973  303486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:09:36.220090  303486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:09:36.220181  303486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:09:36.220302  303486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:09:36.220414  303486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:09:36.220530  303486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:09:36.220626  303486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:09:36.220691  303486 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:09:36.220767  303486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:09:36.377012  303486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:09:36.706154  303486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:09:36.907341  303486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:09:37.091990  303486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:09:37.122813  303486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:09:37.124422  303486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:09:37.124531  303486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:09:37.277461  303486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:09:33.294289  303063 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:33.294346  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:33.362317  303063 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:33.362364  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:33.375712  303063 logs.go:123] Gathering logs for kube-scheduler [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862] ...
	I0920 19:09:33.375747  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:33.411136  303063 logs.go:123] Gathering logs for storage-provisioner [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5] ...
	I0920 19:09:33.411168  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:33.445649  303063 logs.go:123] Gathering logs for storage-provisioner [0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85] ...
	I0920 19:09:33.445690  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:33.478869  303063 logs.go:123] Gathering logs for container status ...
	I0920 19:09:33.478898  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:33.529433  303063 logs.go:123] Gathering logs for kube-apiserver [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f] ...
	I0920 19:09:33.529480  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:33.570515  303063 logs.go:123] Gathering logs for coredns [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f] ...
	I0920 19:09:33.570560  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:36.107490  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:36.124979  303063 api_server.go:72] duration metric: took 4m17.429642296s to wait for apiserver process to appear ...
	I0920 19:09:36.125014  303063 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:09:36.125069  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:36.125145  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:36.181962  303063 cri.go:89] found id: "f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:36.181990  303063 cri.go:89] found id: ""
	I0920 19:09:36.182001  303063 logs.go:276] 1 containers: [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f]
	I0920 19:09:36.182061  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.186792  303063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:36.186876  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:36.235963  303063 cri.go:89] found id: "5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:36.235993  303063 cri.go:89] found id: ""
	I0920 19:09:36.236003  303063 logs.go:276] 1 containers: [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281]
	I0920 19:09:36.236066  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.241177  303063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:36.241321  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:36.288324  303063 cri.go:89] found id: "88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:36.288353  303063 cri.go:89] found id: ""
	I0920 19:09:36.288361  303063 logs.go:276] 1 containers: [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f]
	I0920 19:09:36.288415  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.293328  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:36.293413  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:36.335126  303063 cri.go:89] found id: "25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:36.335153  303063 cri.go:89] found id: ""
	I0920 19:09:36.335163  303063 logs.go:276] 1 containers: [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862]
	I0920 19:09:36.335226  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.339400  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:36.339470  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:36.375555  303063 cri.go:89] found id: "3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:36.375582  303063 cri.go:89] found id: ""
	I0920 19:09:36.375592  303063 logs.go:276] 1 containers: [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4]
	I0920 19:09:36.375657  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.379679  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:36.379753  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:36.415398  303063 cri.go:89] found id: "9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:36.415424  303063 cri.go:89] found id: ""
	I0920 19:09:36.415434  303063 logs.go:276] 1 containers: [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba]
	I0920 19:09:36.415495  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.420183  303063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:36.420260  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:36.462018  303063 cri.go:89] found id: ""
	I0920 19:09:36.462049  303063 logs.go:276] 0 containers: []
	W0920 19:09:36.462060  303063 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:36.462068  303063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 19:09:36.462129  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 19:09:36.515520  303063 cri.go:89] found id: "a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:36.515551  303063 cri.go:89] found id: "0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:36.515557  303063 cri.go:89] found id: ""
	I0920 19:09:36.515567  303063 logs.go:276] 2 containers: [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85]
	I0920 19:09:36.515628  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.520140  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.524197  303063 logs.go:123] Gathering logs for kube-apiserver [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f] ...
	I0920 19:09:36.524222  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:36.589535  303063 logs.go:123] Gathering logs for coredns [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f] ...
	I0920 19:09:36.589570  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:36.628836  303063 logs.go:123] Gathering logs for storage-provisioner [0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85] ...
	I0920 19:09:36.628865  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:36.667614  303063 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:36.667654  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:37.164164  303063 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:37.164222  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:37.253505  303063 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:37.253550  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:37.272704  303063 logs.go:123] Gathering logs for kube-scheduler [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862] ...
	I0920 19:09:37.272742  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:37.315827  303063 logs.go:123] Gathering logs for kube-proxy [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4] ...
	I0920 19:09:37.315869  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:37.360449  303063 logs.go:123] Gathering logs for kube-controller-manager [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba] ...
	I0920 19:09:37.360479  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:37.428225  303063 logs.go:123] Gathering logs for storage-provisioner [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5] ...
	I0920 19:09:37.428270  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:37.469766  303063 logs.go:123] Gathering logs for container status ...
	I0920 19:09:37.469795  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:37.524517  303063 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:37.524553  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:09:37.652128  303063 logs.go:123] Gathering logs for etcd [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281] ...
	I0920 19:09:37.652162  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:37.036846  302869 out.go:235]   - Generating certificates and keys ...
	I0920 19:09:37.036956  302869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:09:37.037061  302869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:09:37.037194  302869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:09:37.037284  302869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:09:37.037386  302869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:09:37.037462  302869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:09:37.037546  302869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:09:37.037635  302869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:09:37.037734  302869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:09:37.037847  302869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:09:37.037918  302869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:09:37.037995  302869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:09:37.116270  302869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:09:37.615537  302869 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 19:09:37.907479  302869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:09:38.090167  302869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:09:38.209430  302869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:09:38.209780  302869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:09:38.212626  302869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:09:37.279714  303486 out.go:235]   - Booting up control plane ...
	I0920 19:09:37.279861  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:09:37.288448  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:09:37.289724  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:09:37.290822  303486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:09:37.294106  303486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 19:09:38.214873  302869 out.go:235]   - Booting up control plane ...
	I0920 19:09:38.214994  302869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:09:38.215102  302869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:09:38.215199  302869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:09:38.232798  302869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:09:38.238716  302869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:09:38.238784  302869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:09:38.370841  302869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 19:09:38.371037  302869 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 19:09:38.463252  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:40.463322  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:40.212781  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:09:40.217868  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 200:
	ok
	I0920 19:09:40.219021  303063 api_server.go:141] control plane version: v1.31.1
	I0920 19:09:40.219044  303063 api_server.go:131] duration metric: took 4.094023157s to wait for apiserver health ...
	I0920 19:09:40.219053  303063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:09:40.219077  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:40.219128  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:40.264337  303063 cri.go:89] found id: "f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:40.264365  303063 cri.go:89] found id: ""
	I0920 19:09:40.264376  303063 logs.go:276] 1 containers: [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f]
	I0920 19:09:40.264434  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.270143  303063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:40.270222  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:40.321696  303063 cri.go:89] found id: "5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:40.321723  303063 cri.go:89] found id: ""
	I0920 19:09:40.321733  303063 logs.go:276] 1 containers: [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281]
	I0920 19:09:40.321799  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.329068  303063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:40.329149  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:40.387241  303063 cri.go:89] found id: "88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:40.387329  303063 cri.go:89] found id: ""
	I0920 19:09:40.387357  303063 logs.go:276] 1 containers: [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f]
	I0920 19:09:40.387427  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.392896  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:40.392975  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:40.429173  303063 cri.go:89] found id: "25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:40.429200  303063 cri.go:89] found id: ""
	I0920 19:09:40.429210  303063 logs.go:276] 1 containers: [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862]
	I0920 19:09:40.429284  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.434102  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:40.434179  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:40.480569  303063 cri.go:89] found id: "3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:40.480598  303063 cri.go:89] found id: ""
	I0920 19:09:40.480607  303063 logs.go:276] 1 containers: [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4]
	I0920 19:09:40.480669  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.485821  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:40.485935  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:40.531502  303063 cri.go:89] found id: "9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:40.531543  303063 cri.go:89] found id: ""
	I0920 19:09:40.531554  303063 logs.go:276] 1 containers: [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba]
	I0920 19:09:40.531613  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.535699  303063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:40.535769  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:40.569788  303063 cri.go:89] found id: ""
	I0920 19:09:40.569823  303063 logs.go:276] 0 containers: []
	W0920 19:09:40.569835  303063 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:40.569842  303063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 19:09:40.569928  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 19:09:40.604668  303063 cri.go:89] found id: "a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:40.604703  303063 cri.go:89] found id: "0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:40.604710  303063 cri.go:89] found id: ""
	I0920 19:09:40.604721  303063 logs.go:276] 2 containers: [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85]
	I0920 19:09:40.604790  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.608948  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.613331  303063 logs.go:123] Gathering logs for kube-apiserver [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f] ...
	I0920 19:09:40.613360  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:40.657680  303063 logs.go:123] Gathering logs for kube-scheduler [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862] ...
	I0920 19:09:40.657726  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:40.698087  303063 logs.go:123] Gathering logs for kube-controller-manager [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba] ...
	I0920 19:09:40.698125  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:40.753643  303063 logs.go:123] Gathering logs for storage-provisioner [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5] ...
	I0920 19:09:40.753683  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:40.791741  303063 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:40.791790  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:41.176451  303063 logs.go:123] Gathering logs for container status ...
	I0920 19:09:41.176497  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:41.226352  303063 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:41.226386  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:41.307652  303063 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:41.307694  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:41.323271  303063 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:41.323307  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:09:41.441151  303063 logs.go:123] Gathering logs for etcd [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281] ...
	I0920 19:09:41.441195  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:41.495438  303063 logs.go:123] Gathering logs for coredns [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f] ...
	I0920 19:09:41.495494  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:41.543879  303063 logs.go:123] Gathering logs for kube-proxy [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4] ...
	I0920 19:09:41.543930  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:41.595010  303063 logs.go:123] Gathering logs for storage-provisioner [0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85] ...
	I0920 19:09:41.595055  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
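	(The "Gathering logs for ..." block above collects each container's recent output by running crictl over ssh_runner on the node. Purely as an illustration, not the test harness's own helper, the following Go sketch runs the same crictl invocation locally; the container ID is copied from the log line above, and having crictl installed with sudo access on the current host is an assumption of the example.)

	// Illustrative sketch only: fetch the last 400 lines of a container's logs
	// with crictl, mirroring the "Gathering logs for storage-provisioner" step above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Container ID taken from the preceding log line; adjust for your own node.
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400",
			"0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85").CombinedOutput()
		if err != nil {
			fmt.Println("crictl failed:", err)
		}
		fmt.Print(string(out))
	}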
	I0920 19:09:44.140048  303063 system_pods.go:59] 8 kube-system pods found
	I0920 19:09:44.140078  303063 system_pods.go:61] "coredns-7c65d6cfc9-427x2" [48b87f9f-4697-4d76-aed1-3d54720172c6] Running
	I0920 19:09:44.140083  303063 system_pods.go:61] "etcd-default-k8s-diff-port-612312" [8537d9ce-87da-425d-a90a-eb4f30f9d23f] Running
	I0920 19:09:44.140087  303063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-612312" [d5705298-0b1f-4f95-bf5d-c795e09dd30e] Running
	I0920 19:09:44.140091  303063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-612312" [99b04f73-91d6-42d4-8def-09b65ca142f6] Running
	I0920 19:09:44.140094  303063 system_pods.go:61] "kube-proxy-zp8l5" [9fe30e51-ef3f-4448-916a-8ad75832b207] Running
	I0920 19:09:44.140097  303063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-612312" [4b9dfd39-5b60-4a90-9edf-0af7e99f0ac6] Running
	I0920 19:09:44.140104  303063 system_pods.go:61] "metrics-server-6867b74b74-2tnqc" [35ce9a11-e606-41da-84bf-b3c5e9a18245] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:44.140108  303063 system_pods.go:61] "storage-provisioner" [6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502] Running
	I0920 19:09:44.140115  303063 system_pods.go:74] duration metric: took 3.921056539s to wait for pod list to return data ...
	I0920 19:09:44.140122  303063 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:09:44.143381  303063 default_sa.go:45] found service account: "default"
	I0920 19:09:44.143409  303063 default_sa.go:55] duration metric: took 3.281031ms for default service account to be created ...
	I0920 19:09:44.143422  303063 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:09:44.148161  303063 system_pods.go:86] 8 kube-system pods found
	I0920 19:09:44.148191  303063 system_pods.go:89] "coredns-7c65d6cfc9-427x2" [48b87f9f-4697-4d76-aed1-3d54720172c6] Running
	I0920 19:09:44.148199  303063 system_pods.go:89] "etcd-default-k8s-diff-port-612312" [8537d9ce-87da-425d-a90a-eb4f30f9d23f] Running
	I0920 19:09:44.148205  303063 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-612312" [d5705298-0b1f-4f95-bf5d-c795e09dd30e] Running
	I0920 19:09:44.148212  303063 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-612312" [99b04f73-91d6-42d4-8def-09b65ca142f6] Running
	I0920 19:09:44.148216  303063 system_pods.go:89] "kube-proxy-zp8l5" [9fe30e51-ef3f-4448-916a-8ad75832b207] Running
	I0920 19:09:44.148221  303063 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-612312" [4b9dfd39-5b60-4a90-9edf-0af7e99f0ac6] Running
	I0920 19:09:44.148230  303063 system_pods.go:89] "metrics-server-6867b74b74-2tnqc" [35ce9a11-e606-41da-84bf-b3c5e9a18245] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:44.148236  303063 system_pods.go:89] "storage-provisioner" [6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502] Running
	I0920 19:09:44.148248  303063 system_pods.go:126] duration metric: took 4.819429ms to wait for k8s-apps to be running ...
	I0920 19:09:44.148260  303063 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:09:44.148312  303063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:44.163839  303063 system_svc.go:56] duration metric: took 15.568956ms WaitForService to wait for kubelet
	I0920 19:09:44.163882  303063 kubeadm.go:582] duration metric: took 4m25.468555427s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:09:44.163911  303063 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:09:44.167622  303063 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:09:44.167656  303063 node_conditions.go:123] node cpu capacity is 2
	I0920 19:09:44.167671  303063 node_conditions.go:105] duration metric: took 3.752828ms to run NodePressure ...
	I0920 19:09:44.167690  303063 start.go:241] waiting for startup goroutines ...
	I0920 19:09:44.167700  303063 start.go:246] waiting for cluster config update ...
	I0920 19:09:44.167716  303063 start.go:255] writing updated cluster config ...
	I0920 19:09:44.168208  303063 ssh_runner.go:195] Run: rm -f paused
	I0920 19:09:44.223860  303063 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:09:44.226056  303063 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-612312" cluster and "default" namespace by default
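	(The api_server.go lines above poll the apiserver's /healthz endpoint until it returns 200 before moving on to the kube-system pod checks. Below is a minimal Go sketch of that kind of health poll; it is illustrative only, not minikube's actual implementation. The URL is taken from the log above, while the timeout, poll interval, and skipped TLS verification are assumptions of the example.)

	// Illustrative sketch: poll an apiserver /healthz endpoint until it answers 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The test clusters use self-signed certificates, so verification is
			// skipped here purely for illustration.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: ok
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.230:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}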
	I0920 19:09:39.373109  302869 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002236347s
	I0920 19:09:39.373229  302869 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 19:09:44.375102  302869 kubeadm.go:310] [api-check] The API server is healthy after 5.001998039s
	I0920 19:09:44.405405  302869 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 19:09:44.428364  302869 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 19:09:44.470575  302869 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 19:09:44.470870  302869 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-339897 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 19:09:44.505469  302869 kubeadm.go:310] [bootstrap-token] Using token: v5zzut.gmtb3j9b0yqqwvtv
	I0920 19:09:44.507561  302869 out.go:235]   - Configuring RBAC rules ...
	I0920 19:09:44.507721  302869 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 19:09:44.522092  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 19:09:44.555238  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 19:09:44.559971  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 19:09:44.566954  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 19:09:44.574111  302869 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 19:09:44.788900  302869 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 19:09:45.229897  302869 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 19:09:45.788397  302869 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 19:09:45.789415  302869 kubeadm.go:310] 
	I0920 19:09:45.789504  302869 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 19:09:45.789516  302869 kubeadm.go:310] 
	I0920 19:09:45.789614  302869 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 19:09:45.789631  302869 kubeadm.go:310] 
	I0920 19:09:45.789664  302869 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 19:09:45.789804  302869 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 19:09:45.789897  302869 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 19:09:45.789930  302869 kubeadm.go:310] 
	I0920 19:09:45.790043  302869 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 19:09:45.790061  302869 kubeadm.go:310] 
	I0920 19:09:45.790130  302869 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 19:09:45.790145  302869 kubeadm.go:310] 
	I0920 19:09:45.790203  302869 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 19:09:45.790269  302869 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 19:09:45.790330  302869 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 19:09:45.790337  302869 kubeadm.go:310] 
	I0920 19:09:45.790438  302869 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 19:09:45.790549  302869 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 19:09:45.790563  302869 kubeadm.go:310] 
	I0920 19:09:45.790664  302869 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v5zzut.gmtb3j9b0yqqwvtv \
	I0920 19:09:45.790792  302869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 \
	I0920 19:09:45.790823  302869 kubeadm.go:310] 	--control-plane 
	I0920 19:09:45.790835  302869 kubeadm.go:310] 
	I0920 19:09:45.790962  302869 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 19:09:45.790977  302869 kubeadm.go:310] 
	I0920 19:09:45.791045  302869 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v5zzut.gmtb3j9b0yqqwvtv \
	I0920 19:09:45.791164  302869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 
	I0920 19:09:45.792825  302869 kubeadm.go:310] W0920 19:09:36.880654    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:09:45.793122  302869 kubeadm.go:310] W0920 19:09:36.881516    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:09:45.793273  302869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:09:45.793317  302869 cni.go:84] Creating CNI manager for ""
	I0920 19:09:45.793331  302869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:09:45.795282  302869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:09:42.464639  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:44.464714  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:45.796961  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:09:45.808972  302869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:09:45.831122  302869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:09:45.831174  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:45.831208  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-339897 minikube.k8s.io/updated_at=2024_09_20T19_09_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=embed-certs-339897 minikube.k8s.io/primary=true
	I0920 19:09:46.057677  302869 ops.go:34] apiserver oom_adj: -16
	I0920 19:09:46.057798  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:46.558670  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:47.057876  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:47.558913  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:48.057925  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:48.557985  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:49.057925  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:49.558500  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:50.058507  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:50.198032  302869 kubeadm.go:1113] duration metric: took 4.366908909s to wait for elevateKubeSystemPrivileges
	I0920 19:09:50.198074  302869 kubeadm.go:394] duration metric: took 5m1.087269263s to StartCluster
	I0920 19:09:50.198100  302869 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:09:50.198209  302869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:09:50.200736  302869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:09:50.201068  302869 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:09:50.201327  302869 config.go:182] Loaded profile config "embed-certs-339897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:09:50.201393  302869 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:09:50.201482  302869 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-339897"
	I0920 19:09:50.201502  302869 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-339897"
	W0920 19:09:50.201512  302869 addons.go:243] addon storage-provisioner should already be in state true
	I0920 19:09:50.201542  302869 host.go:66] Checking if "embed-certs-339897" exists ...
	I0920 19:09:50.202007  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.202050  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.202261  302869 addons.go:69] Setting default-storageclass=true in profile "embed-certs-339897"
	I0920 19:09:50.202285  302869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-339897"
	I0920 19:09:50.202285  302869 addons.go:69] Setting metrics-server=true in profile "embed-certs-339897"
	I0920 19:09:50.202311  302869 addons.go:234] Setting addon metrics-server=true in "embed-certs-339897"
	W0920 19:09:50.202319  302869 addons.go:243] addon metrics-server should already be in state true
	I0920 19:09:50.202349  302869 host.go:66] Checking if "embed-certs-339897" exists ...
	I0920 19:09:50.202688  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.202752  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.202755  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.202793  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.203329  302869 out.go:177] * Verifying Kubernetes components...
	I0920 19:09:50.204655  302869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:09:50.224081  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46289
	I0920 19:09:50.224334  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45801
	I0920 19:09:50.224337  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0920 19:09:50.224579  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.224941  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.225039  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.225214  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.225231  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.225643  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.225682  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.225699  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.225798  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.225818  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.226018  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.226080  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.226564  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.226594  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.226777  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.227444  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.227494  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.229747  302869 addons.go:234] Setting addon default-storageclass=true in "embed-certs-339897"
	W0920 19:09:50.229771  302869 addons.go:243] addon default-storageclass should already be in state true
	I0920 19:09:50.229803  302869 host.go:66] Checking if "embed-certs-339897" exists ...
	I0920 19:09:50.230208  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.230261  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.243865  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42249
	I0920 19:09:50.244292  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.244828  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.244851  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.245080  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I0920 19:09:50.245252  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.245714  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.245810  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.246303  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.246323  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.246661  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.246806  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.248050  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:09:50.248671  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:09:50.250223  302869 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 19:09:50.250319  302869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:09:46.963562  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:48.965266  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:50.250485  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38237
	I0920 19:09:50.250954  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.251418  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.251435  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.251535  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:09:50.251556  302869 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:09:50.251594  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:09:50.251680  302869 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:09:50.251693  302869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:09:50.251706  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:09:50.251889  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.252452  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.252502  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.255422  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.255692  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.255902  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:09:50.255928  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.256372  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:09:50.256396  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.256442  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:09:50.256663  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:09:50.256697  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:09:50.256840  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:09:50.256868  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:09:50.257066  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:09:50.257089  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:09:50.257268  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:09:50.272424  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35925
	I0920 19:09:50.273107  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.273729  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.273746  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.274208  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.274402  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.276189  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:09:50.276384  302869 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:09:50.276399  302869 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:09:50.276417  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:09:50.279319  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.279718  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:09:50.279747  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.279850  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:09:50.280044  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:09:50.280305  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:09:50.280481  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:09:50.407262  302869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:09:50.455491  302869 node_ready.go:35] waiting up to 6m0s for node "embed-certs-339897" to be "Ready" ...
	I0920 19:09:50.503634  302869 node_ready.go:49] node "embed-certs-339897" has status "Ready":"True"
	I0920 19:09:50.503663  302869 node_ready.go:38] duration metric: took 48.13478ms for node "embed-certs-339897" to be "Ready" ...
	I0920 19:09:50.503672  302869 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:50.532327  302869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:50.589446  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:09:50.589482  302869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 19:09:50.613277  302869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:09:50.619161  302869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:09:50.662197  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:09:50.662232  302869 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:09:50.753073  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:09:50.753106  302869 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:09:50.842679  302869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:09:51.790932  302869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.171721983s)
	I0920 19:09:51.790997  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791012  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.791029  302869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.177708427s)
	I0920 19:09:51.791073  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791089  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.791380  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.791438  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.791444  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.791483  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791380  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.791527  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.791541  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791556  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.791416  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.791493  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.793128  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.793159  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.793177  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.793149  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.793148  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.793208  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.820906  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.820939  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.821290  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.821312  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:52.003182  302869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.160452395s)
	I0920 19:09:52.003247  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:52.003261  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:52.003593  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:52.003600  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:52.003622  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:52.003632  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:52.003640  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:52.003985  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:52.004003  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:52.004017  302869 addons.go:475] Verifying addon metrics-server=true in "embed-certs-339897"
	I0920 19:09:52.006444  302869 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 19:09:52.008313  302869 addons.go:510] duration metric: took 1.806914162s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 19:09:52.539578  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:53.539999  302869 pod_ready.go:93] pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:53.540026  302869 pod_ready.go:82] duration metric: took 3.007669334s for pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:53.540036  302869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:51.463340  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:53.963461  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:55.547997  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:57.552686  302869 pod_ready.go:93] pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.552714  302869 pod_ready.go:82] duration metric: took 4.01267227s for pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.552724  302869 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.560885  302869 pod_ready.go:93] pod "etcd-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.560910  302869 pod_ready.go:82] duration metric: took 8.179457ms for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.560919  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.577414  302869 pod_ready.go:93] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.577441  302869 pod_ready.go:82] duration metric: took 16.515029ms for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.577451  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.588547  302869 pod_ready.go:93] pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.588574  302869 pod_ready.go:82] duration metric: took 11.116334ms for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.588583  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-whcbh" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.594919  302869 pod_ready.go:93] pod "kube-proxy-whcbh" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.594942  302869 pod_ready.go:82] duration metric: took 6.35266ms for pod "kube-proxy-whcbh" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.594951  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.943559  302869 pod_ready.go:93] pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.943585  302869 pod_ready.go:82] duration metric: took 348.626555ms for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.943592  302869 pod_ready.go:39] duration metric: took 7.439908161s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:57.943609  302869 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:09:57.943662  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:57.959537  302869 api_server.go:72] duration metric: took 7.758426976s to wait for apiserver process to appear ...
	I0920 19:09:57.959567  302869 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:09:57.959594  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:09:57.964316  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 200:
	ok
	I0920 19:09:57.965668  302869 api_server.go:141] control plane version: v1.31.1
	I0920 19:09:57.965690  302869 api_server.go:131] duration metric: took 6.115168ms to wait for apiserver health ...
	I0920 19:09:57.965697  302869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:09:58.148306  302869 system_pods.go:59] 9 kube-system pods found
	I0920 19:09:58.148339  302869 system_pods.go:61] "coredns-7c65d6cfc9-2zlww" [5eb78763-7160-4ae9-80c3-87a82a6dc992] Running
	I0920 19:09:58.148345  302869 system_pods.go:61] "coredns-7c65d6cfc9-7fxdr" [85a441e8-39b0-4623-a7bd-eebbd1574f20] Running
	I0920 19:09:58.148349  302869 system_pods.go:61] "etcd-embed-certs-339897" [150a2276-3896-498e-89f7-44cf4554da69] Running
	I0920 19:09:58.148352  302869 system_pods.go:61] "kube-apiserver-embed-certs-339897" [396520a3-2567-4267-852d-9f9525dd5e01] Running
	I0920 19:09:58.148356  302869 system_pods.go:61] "kube-controller-manager-embed-certs-339897" [7f64ad97-3230-4cf5-92ad-cf58ef88a2b0] Running
	I0920 19:09:58.148359  302869 system_pods.go:61] "kube-proxy-whcbh" [3a2dbb60-1a51-4874-98b8-75d1a35b0512] Running
	I0920 19:09:58.148361  302869 system_pods.go:61] "kube-scheduler-embed-certs-339897" [31214783-f8cf-46c6-a305-fde7692dfc72] Running
	I0920 19:09:58.148367  302869 system_pods.go:61] "metrics-server-6867b74b74-tw9fh" [8366591d-8916-4b9f-be8a-64ddc185f576] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:58.148371  302869 system_pods.go:61] "storage-provisioner" [8bcc482a-6905-436a-8d90-7eee9ba18f8b] Running
	I0920 19:09:58.148381  302869 system_pods.go:74] duration metric: took 182.677921ms to wait for pod list to return data ...
	I0920 19:09:58.148387  302869 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:09:58.344318  302869 default_sa.go:45] found service account: "default"
	I0920 19:09:58.344346  302869 default_sa.go:55] duration metric: took 195.952788ms for default service account to be created ...
	I0920 19:09:58.344357  302869 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:09:58.547996  302869 system_pods.go:86] 9 kube-system pods found
	I0920 19:09:58.548034  302869 system_pods.go:89] "coredns-7c65d6cfc9-2zlww" [5eb78763-7160-4ae9-80c3-87a82a6dc992] Running
	I0920 19:09:58.548043  302869 system_pods.go:89] "coredns-7c65d6cfc9-7fxdr" [85a441e8-39b0-4623-a7bd-eebbd1574f20] Running
	I0920 19:09:58.548048  302869 system_pods.go:89] "etcd-embed-certs-339897" [150a2276-3896-498e-89f7-44cf4554da69] Running
	I0920 19:09:58.548054  302869 system_pods.go:89] "kube-apiserver-embed-certs-339897" [396520a3-2567-4267-852d-9f9525dd5e01] Running
	I0920 19:09:58.548060  302869 system_pods.go:89] "kube-controller-manager-embed-certs-339897" [7f64ad97-3230-4cf5-92ad-cf58ef88a2b0] Running
	I0920 19:09:58.548066  302869 system_pods.go:89] "kube-proxy-whcbh" [3a2dbb60-1a51-4874-98b8-75d1a35b0512] Running
	I0920 19:09:58.548070  302869 system_pods.go:89] "kube-scheduler-embed-certs-339897" [31214783-f8cf-46c6-a305-fde7692dfc72] Running
	I0920 19:09:58.548079  302869 system_pods.go:89] "metrics-server-6867b74b74-tw9fh" [8366591d-8916-4b9f-be8a-64ddc185f576] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:58.548085  302869 system_pods.go:89] "storage-provisioner" [8bcc482a-6905-436a-8d90-7eee9ba18f8b] Running
	I0920 19:09:58.548099  302869 system_pods.go:126] duration metric: took 203.735171ms to wait for k8s-apps to be running ...
	I0920 19:09:58.548108  302869 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:09:58.548165  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:58.563235  302869 system_svc.go:56] duration metric: took 15.107997ms WaitForService to wait for kubelet
	I0920 19:09:58.563274  302869 kubeadm.go:582] duration metric: took 8.362165276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:09:58.563299  302869 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:09:58.744093  302869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:09:58.744155  302869 node_conditions.go:123] node cpu capacity is 2
	I0920 19:09:58.744171  302869 node_conditions.go:105] duration metric: took 180.864643ms to run NodePressure ...
	I0920 19:09:58.744186  302869 start.go:241] waiting for startup goroutines ...
	I0920 19:09:58.744196  302869 start.go:246] waiting for cluster config update ...
	I0920 19:09:58.744220  302869 start.go:255] writing updated cluster config ...
	I0920 19:09:58.744526  302869 ssh_runner.go:195] Run: rm -f paused
	I0920 19:09:58.794946  302869 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:09:58.797418  302869 out.go:177] * Done! kubectl is now configured to use "embed-certs-339897" cluster and "default" namespace by default
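	(The pod_ready.go lines above wait on each control-plane pod's Ready condition before the embed-certs-339897 profile is reported as done. The following client-go sketch shows one way such a readiness check can look; it is illustrative only, not the test code itself. The kubeconfig path and pod name are copied from the surrounding log and are assumptions of the example when run elsewhere.)

	// Illustrative sketch: read a kube-system pod and report its Ready condition.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady returns true when the PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19679-237658/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-339897", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
	}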
	I0920 19:09:56.464024  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:58.464282  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:00.963419  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:02.963506  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:04.963804  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:07.463546  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:09.962855  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:11.963447  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:13.964915  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:17.296411  303486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 19:10:17.296525  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:17.296765  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:16.462968  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:18.963906  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:22.297630  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:22.297923  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:21.463201  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:22.457112  302538 pod_ready.go:82] duration metric: took 4m0.000881628s for pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace to be "Ready" ...
	E0920 19:10:22.457161  302538 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 19:10:22.457180  302538 pod_ready.go:39] duration metric: took 4m14.047738931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:10:22.457208  302538 kubeadm.go:597] duration metric: took 4m21.028566787s to restartPrimaryControlPlane
	W0920 19:10:22.457265  302538 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 19:10:22.457291  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:10:32.298239  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:32.298525  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:48.632052  302538 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.17473972s)
	I0920 19:10:48.632143  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:10:48.648205  302538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:10:48.658969  302538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:10:48.668954  302538 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:10:48.668981  302538 kubeadm.go:157] found existing configuration files:
	
	I0920 19:10:48.669035  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:10:48.678138  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:10:48.678229  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:10:48.687960  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:10:48.697578  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:10:48.697644  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:10:48.707573  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:10:48.717059  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:10:48.717123  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:10:48.727642  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:10:48.737599  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:10:48.737681  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:10:48.749542  302538 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:10:48.795278  302538 kubeadm.go:310] W0920 19:10:48.780113    2961 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:10:48.796096  302538 kubeadm.go:310] W0920 19:10:48.780928    2961 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:10:48.910958  302538 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:10:52.299257  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:52.299561  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:56.716717  302538 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 19:10:56.716805  302538 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:10:56.716938  302538 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:10:56.717078  302538 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:10:56.717170  302538 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 19:10:56.717225  302538 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:10:56.719086  302538 out.go:235]   - Generating certificates and keys ...
	I0920 19:10:56.719199  302538 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:10:56.719286  302538 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:10:56.719407  302538 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:10:56.719505  302538 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:10:56.719624  302538 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:10:56.719720  302538 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:10:56.719811  302538 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:10:56.719928  302538 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:10:56.720049  302538 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:10:56.720154  302538 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:10:56.720224  302538 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:10:56.720287  302538 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:10:56.720334  302538 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:10:56.720386  302538 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 19:10:56.720432  302538 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:10:56.720486  302538 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:10:56.720533  302538 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:10:56.720606  302538 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:10:56.720701  302538 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:10:56.722504  302538 out.go:235]   - Booting up control plane ...
	I0920 19:10:56.722620  302538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:10:56.722748  302538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:10:56.722872  302538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:10:56.723020  302538 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:10:56.723105  302538 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:10:56.723148  302538 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:10:56.723337  302538 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 19:10:56.723455  302538 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 19:10:56.723515  302538 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.448196ms
	I0920 19:10:56.723612  302538 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 19:10:56.723706  302538 kubeadm.go:310] [api-check] The API server is healthy after 5.001495273s
	I0920 19:10:56.723888  302538 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 19:10:56.724046  302538 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 19:10:56.724131  302538 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 19:10:56.724406  302538 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-037711 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 19:10:56.724464  302538 kubeadm.go:310] [bootstrap-token] Using token: 2hi1gl.ipidz4nvj8gip8th
	I0920 19:10:56.726099  302538 out.go:235]   - Configuring RBAC rules ...
	I0920 19:10:56.726212  302538 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 19:10:56.726315  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 19:10:56.726479  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 19:10:56.726641  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 19:10:56.726794  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 19:10:56.726926  302538 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 19:10:56.727082  302538 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 19:10:56.727154  302538 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 19:10:56.727202  302538 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 19:10:56.727209  302538 kubeadm.go:310] 
	I0920 19:10:56.727261  302538 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 19:10:56.727267  302538 kubeadm.go:310] 
	I0920 19:10:56.727363  302538 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 19:10:56.727383  302538 kubeadm.go:310] 
	I0920 19:10:56.727424  302538 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 19:10:56.727507  302538 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 19:10:56.727607  302538 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 19:10:56.727620  302538 kubeadm.go:310] 
	I0920 19:10:56.727699  302538 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 19:10:56.727712  302538 kubeadm.go:310] 
	I0920 19:10:56.727775  302538 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 19:10:56.727790  302538 kubeadm.go:310] 
	I0920 19:10:56.727865  302538 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 19:10:56.727969  302538 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 19:10:56.728032  302538 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 19:10:56.728038  302538 kubeadm.go:310] 
	I0920 19:10:56.728106  302538 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 19:10:56.728171  302538 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 19:10:56.728177  302538 kubeadm.go:310] 
	I0920 19:10:56.728271  302538 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2hi1gl.ipidz4nvj8gip8th \
	I0920 19:10:56.728406  302538 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 \
	I0920 19:10:56.728438  302538 kubeadm.go:310] 	--control-plane 
	I0920 19:10:56.728451  302538 kubeadm.go:310] 
	I0920 19:10:56.728571  302538 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 19:10:56.728577  302538 kubeadm.go:310] 
	I0920 19:10:56.728675  302538 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2hi1gl.ipidz4nvj8gip8th \
	I0920 19:10:56.728823  302538 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 
	I0920 19:10:56.728837  302538 cni.go:84] Creating CNI manager for ""
	I0920 19:10:56.728843  302538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:10:56.730851  302538 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:10:56.732462  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:10:56.745326  302538 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:10:56.764458  302538 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:10:56.764563  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:56.764620  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-037711 minikube.k8s.io/updated_at=2024_09_20T19_10_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=no-preload-037711 minikube.k8s.io/primary=true
	I0920 19:10:56.792026  302538 ops.go:34] apiserver oom_adj: -16
	I0920 19:10:56.976178  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:57.477172  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:57.977076  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:58.476357  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:58.977162  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:59.476924  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:59.976506  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:11:00.080925  302538 kubeadm.go:1113] duration metric: took 3.316440483s to wait for elevateKubeSystemPrivileges
	I0920 19:11:00.080968  302538 kubeadm.go:394] duration metric: took 4m58.701872852s to StartCluster
	I0920 19:11:00.080994  302538 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:11:00.081106  302538 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:11:00.082815  302538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:11:00.083064  302538 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:11:00.083137  302538 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:11:00.083243  302538 addons.go:69] Setting storage-provisioner=true in profile "no-preload-037711"
	I0920 19:11:00.083263  302538 addons.go:234] Setting addon storage-provisioner=true in "no-preload-037711"
	W0920 19:11:00.083272  302538 addons.go:243] addon storage-provisioner should already be in state true
	I0920 19:11:00.083263  302538 addons.go:69] Setting default-storageclass=true in profile "no-preload-037711"
	I0920 19:11:00.083299  302538 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-037711"
	I0920 19:11:00.083308  302538 host.go:66] Checking if "no-preload-037711" exists ...
	I0920 19:11:00.083304  302538 addons.go:69] Setting metrics-server=true in profile "no-preload-037711"
	I0920 19:11:00.083342  302538 addons.go:234] Setting addon metrics-server=true in "no-preload-037711"
	W0920 19:11:00.083354  302538 addons.go:243] addon metrics-server should already be in state true
	I0920 19:11:00.083385  302538 host.go:66] Checking if "no-preload-037711" exists ...
	I0920 19:11:00.083315  302538 config.go:182] Loaded profile config "no-preload-037711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:11:00.083667  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.083709  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.083715  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.083750  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.083864  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.083912  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.084969  302538 out.go:177] * Verifying Kubernetes components...
	I0920 19:11:00.086652  302538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:11:00.102128  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
	I0920 19:11:00.102362  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33853
	I0920 19:11:00.102750  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43575
	I0920 19:11:00.102879  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.103041  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.103431  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.103635  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.103651  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.103767  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.103783  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.104022  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.104040  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.104042  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.104180  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.104383  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.104394  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.104842  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.104881  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.104927  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.104963  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.107816  302538 addons.go:234] Setting addon default-storageclass=true in "no-preload-037711"
	W0920 19:11:00.107836  302538 addons.go:243] addon default-storageclass should already be in state true
	I0920 19:11:00.107865  302538 host.go:66] Checking if "no-preload-037711" exists ...
	I0920 19:11:00.108193  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.108236  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.121661  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I0920 19:11:00.122693  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.123520  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.123642  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.124299  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.124530  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.125624  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40695
	I0920 19:11:00.126343  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.126439  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I0920 19:11:00.126868  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.126947  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:11:00.127277  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.127302  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.127572  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.127599  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.127646  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.127902  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.128095  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.128318  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.128360  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.129099  302538 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:11:00.129788  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:11:00.130688  302538 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:11:00.130713  302538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:11:00.130732  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:11:00.131393  302538 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 19:11:00.132404  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:11:00.132432  302538 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:11:00.132454  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:11:00.134112  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.134627  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:11:00.134690  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.135041  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:11:00.135215  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:11:00.135448  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:11:00.135550  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:11:00.136315  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.136816  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:11:00.136849  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.137011  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:11:00.137231  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:11:00.137409  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:11:00.137589  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:11:00.166369  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I0920 19:11:00.166884  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.167464  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.167483  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.167850  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.168037  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.169668  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:11:00.169875  302538 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:11:00.169891  302538 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:11:00.169925  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:11:00.172907  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.173383  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:11:00.173416  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.173577  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:11:00.173820  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:11:00.174010  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:11:00.174212  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:11:00.275468  302538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:11:00.290839  302538 node_ready.go:35] waiting up to 6m0s for node "no-preload-037711" to be "Ready" ...
	I0920 19:11:00.300222  302538 node_ready.go:49] node "no-preload-037711" has status "Ready":"True"
	I0920 19:11:00.300244  302538 node_ready.go:38] duration metric: took 9.368069ms for node "no-preload-037711" to be "Ready" ...
	I0920 19:11:00.300253  302538 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:11:00.306099  302538 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:00.364927  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:11:00.364956  302538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 19:11:00.382910  302538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:11:00.392581  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:11:00.392611  302538 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:11:00.404275  302538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:11:00.442677  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:11:00.442707  302538 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:11:00.500976  302538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:11:01.337157  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337196  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337169  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337265  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337558  302538 main.go:141] libmachine: (no-preload-037711) DBG | Closing plugin on server side
	I0920 19:11:01.337573  302538 main.go:141] libmachine: (no-preload-037711) DBG | Closing plugin on server side
	I0920 19:11:01.337600  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.337613  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.337641  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337649  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337685  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.337702  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.337711  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337720  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337961  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.337978  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.338064  302538 main.go:141] libmachine: (no-preload-037711) DBG | Closing plugin on server side
	I0920 19:11:01.338114  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.338133  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.395956  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.395989  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.396327  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.396355  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.580133  302538 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.079115769s)
	I0920 19:11:01.580188  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.580203  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.580548  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.580568  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.580578  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.580586  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.580817  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.580842  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.580853  302538 addons.go:475] Verifying addon metrics-server=true in "no-preload-037711"
	I0920 19:11:01.582786  302538 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 19:11:01.584283  302538 addons.go:510] duration metric: took 1.501156808s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 19:11:02.314471  302538 pod_ready.go:103] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:04.817174  302538 pod_ready.go:103] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:07.312399  302538 pod_ready.go:103] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:07.812969  302538 pod_ready.go:93] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:07.812999  302538 pod_ready.go:82] duration metric: took 7.506877081s for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:07.813008  302538 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:07.818172  302538 pod_ready.go:93] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:07.818200  302538 pod_ready.go:82] duration metric: took 5.184579ms for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:07.818211  302538 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:09.825772  302538 pod_ready.go:103] pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:10.325453  302538 pod_ready.go:93] pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:10.325479  302538 pod_ready.go:82] duration metric: took 2.507262085s for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:10.325489  302538 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:10.331181  302538 pod_ready.go:93] pod "kube-scheduler-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:10.331208  302538 pod_ready.go:82] duration metric: took 5.711573ms for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:10.331216  302538 pod_ready.go:39] duration metric: took 10.030954081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:11:10.331233  302538 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:11:10.331286  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:11:10.348104  302538 api_server.go:72] duration metric: took 10.265008499s to wait for apiserver process to appear ...
	I0920 19:11:10.348135  302538 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:11:10.348157  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:11:10.352242  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 200:
	ok
	I0920 19:11:10.353228  302538 api_server.go:141] control plane version: v1.31.1
	I0920 19:11:10.353249  302538 api_server.go:131] duration metric: took 5.107446ms to wait for apiserver health ...
	I0920 19:11:10.353257  302538 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:11:10.358560  302538 system_pods.go:59] 9 kube-system pods found
	I0920 19:11:10.358588  302538 system_pods.go:61] "coredns-7c65d6cfc9-gdfh9" [61c6d6d8-62b9-4db3-a3c3-fd0daec82a9f] Running
	I0920 19:11:10.358593  302538 system_pods.go:61] "coredns-7c65d6cfc9-h84nm" [6ada3ba7-1ccd-474b-850b-c00a77dfbb92] Running
	I0920 19:11:10.358597  302538 system_pods.go:61] "etcd-no-preload-037711" [9ace2dcd-0562-46d5-99be-65be4ea053d9] Running
	I0920 19:11:10.358601  302538 system_pods.go:61] "kube-apiserver-no-preload-037711" [1dbfa130-d2dd-420d-a32c-1e82b535c112] Running
	I0920 19:11:10.358604  302538 system_pods.go:61] "kube-controller-manager-no-preload-037711" [56462390-dedd-4281-ac85-2671f7a10cb1] Running
	I0920 19:11:10.358607  302538 system_pods.go:61] "kube-proxy-bvfqh" [2170ef3f-58f0-4d42-9f15-d9c952e0e2ec] Running
	I0920 19:11:10.358610  302538 system_pods.go:61] "kube-scheduler-no-preload-037711" [e996ce53-7ee6-4d1d-bd0b-8188d76966b9] Running
	I0920 19:11:10.358617  302538 system_pods.go:61] "metrics-server-6867b74b74-rpfqm" [ba7c8518-6c3e-4751-a9a5-29c77990a29c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:11:10.358620  302538 system_pods.go:61] "storage-provisioner" [e7f05c0a-c6be-4e68-959e-966c17c9cc5e] Running
	I0920 19:11:10.358629  302538 system_pods.go:74] duration metric: took 5.365343ms to wait for pod list to return data ...
	I0920 19:11:10.358635  302538 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:11:10.361229  302538 default_sa.go:45] found service account: "default"
	I0920 19:11:10.361255  302538 default_sa.go:55] duration metric: took 2.612292ms for default service account to be created ...
	I0920 19:11:10.361264  302538 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:11:10.367188  302538 system_pods.go:86] 9 kube-system pods found
	I0920 19:11:10.367221  302538 system_pods.go:89] "coredns-7c65d6cfc9-gdfh9" [61c6d6d8-62b9-4db3-a3c3-fd0daec82a9f] Running
	I0920 19:11:10.367229  302538 system_pods.go:89] "coredns-7c65d6cfc9-h84nm" [6ada3ba7-1ccd-474b-850b-c00a77dfbb92] Running
	I0920 19:11:10.367235  302538 system_pods.go:89] "etcd-no-preload-037711" [9ace2dcd-0562-46d5-99be-65be4ea053d9] Running
	I0920 19:11:10.367241  302538 system_pods.go:89] "kube-apiserver-no-preload-037711" [1dbfa130-d2dd-420d-a32c-1e82b535c112] Running
	I0920 19:11:10.367248  302538 system_pods.go:89] "kube-controller-manager-no-preload-037711" [56462390-dedd-4281-ac85-2671f7a10cb1] Running
	I0920 19:11:10.367254  302538 system_pods.go:89] "kube-proxy-bvfqh" [2170ef3f-58f0-4d42-9f15-d9c952e0e2ec] Running
	I0920 19:11:10.367260  302538 system_pods.go:89] "kube-scheduler-no-preload-037711" [e996ce53-7ee6-4d1d-bd0b-8188d76966b9] Running
	I0920 19:11:10.367267  302538 system_pods.go:89] "metrics-server-6867b74b74-rpfqm" [ba7c8518-6c3e-4751-a9a5-29c77990a29c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:11:10.367273  302538 system_pods.go:89] "storage-provisioner" [e7f05c0a-c6be-4e68-959e-966c17c9cc5e] Running
	I0920 19:11:10.367283  302538 system_pods.go:126] duration metric: took 6.01247ms to wait for k8s-apps to be running ...
	I0920 19:11:10.367292  302538 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:11:10.367354  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:11:10.381551  302538 system_svc.go:56] duration metric: took 14.250301ms WaitForService to wait for kubelet
	I0920 19:11:10.381582  302538 kubeadm.go:582] duration metric: took 10.298492318s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:11:10.381601  302538 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:11:10.385405  302538 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:11:10.385442  302538 node_conditions.go:123] node cpu capacity is 2
	I0920 19:11:10.385455  302538 node_conditions.go:105] duration metric: took 3.849463ms to run NodePressure ...
	I0920 19:11:10.385468  302538 start.go:241] waiting for startup goroutines ...
	I0920 19:11:10.385474  302538 start.go:246] waiting for cluster config update ...
	I0920 19:11:10.385485  302538 start.go:255] writing updated cluster config ...
	I0920 19:11:10.385786  302538 ssh_runner.go:195] Run: rm -f paused
	I0920 19:11:10.436362  302538 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:11:10.438538  302538 out.go:177] * Done! kubectl is now configured to use "no-preload-037711" cluster and "default" namespace by default
	I0920 19:11:32.301334  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:11:32.302020  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:11:32.302048  303486 kubeadm.go:310] 
	I0920 19:11:32.302147  303486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 19:11:32.302252  303486 kubeadm.go:310] 		timed out waiting for the condition
	I0920 19:11:32.302279  303486 kubeadm.go:310] 
	I0920 19:11:32.302366  303486 kubeadm.go:310] 	This error is likely caused by:
	I0920 19:11:32.302453  303486 kubeadm.go:310] 		- The kubelet is not running
	I0920 19:11:32.302713  303486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 19:11:32.302731  303486 kubeadm.go:310] 
	I0920 19:11:32.303023  303486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 19:11:32.303099  303486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 19:11:32.303200  303486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 19:11:32.303232  303486 kubeadm.go:310] 
	I0920 19:11:32.303438  303486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 19:11:32.303669  303486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 19:11:32.303699  303486 kubeadm.go:310] 
	I0920 19:11:32.303965  303486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 19:11:32.304199  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 19:11:32.304410  303486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 19:11:32.304577  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 19:11:32.304624  303486 kubeadm.go:310] 
	I0920 19:11:32.305105  303486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:11:32.305465  303486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 19:11:32.305655  303486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0920 19:11:32.305713  303486 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0920 19:11:32.305758  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:11:32.760742  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:11:32.775675  303486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:11:32.785785  303486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:11:32.785806  303486 kubeadm.go:157] found existing configuration files:
	
	I0920 19:11:32.785854  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:11:32.795133  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:11:32.795210  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:11:32.805681  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:11:32.815299  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:11:32.815362  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:11:32.827215  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:11:32.836597  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:11:32.836682  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:11:32.846621  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:11:32.855610  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:11:32.855675  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:11:32.866824  303486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:11:33.103745  303486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:13:29.101212  303486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 19:13:29.101347  303486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 19:13:29.103031  303486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 19:13:29.103142  303486 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:13:29.103216  303486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:13:29.103318  303486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:13:29.103437  303486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 19:13:29.103507  303486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:13:29.105521  303486 out.go:235]   - Generating certificates and keys ...
	I0920 19:13:29.105622  303486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:13:29.105704  303486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:13:29.105820  303486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:13:29.105955  303486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:13:29.106058  303486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:13:29.106132  303486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:13:29.106219  303486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:13:29.106318  303486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:13:29.106430  303486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:13:29.106548  303486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:13:29.106611  303486 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:13:29.106699  303486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:13:29.106766  303486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:13:29.106844  303486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:13:29.106935  303486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:13:29.107011  303486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:13:29.107117  303486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:13:29.107223  303486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:13:29.107289  303486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:13:29.107376  303486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:13:29.108804  303486 out.go:235]   - Booting up control plane ...
	I0920 19:13:29.108952  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:13:29.109021  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:13:29.109082  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:13:29.109166  303486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:13:29.109313  303486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 19:13:29.109359  303486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 19:13:29.109462  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.109630  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.109699  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.109878  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.109966  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.110133  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.110213  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.110382  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.110441  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.110606  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.110616  303486 kubeadm.go:310] 
	I0920 19:13:29.110661  303486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 19:13:29.110699  303486 kubeadm.go:310] 		timed out waiting for the condition
	I0920 19:13:29.110706  303486 kubeadm.go:310] 
	I0920 19:13:29.110739  303486 kubeadm.go:310] 	This error is likely caused by:
	I0920 19:13:29.110769  303486 kubeadm.go:310] 		- The kubelet is not running
	I0920 19:13:29.110866  303486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 19:13:29.110875  303486 kubeadm.go:310] 
	I0920 19:13:29.110969  303486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 19:13:29.111003  303486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 19:13:29.111031  303486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 19:13:29.111037  303486 kubeadm.go:310] 
	I0920 19:13:29.111141  303486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 19:13:29.111224  303486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 19:13:29.111231  303486 kubeadm.go:310] 
	I0920 19:13:29.111327  303486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 19:13:29.111407  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 19:13:29.111481  303486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 19:13:29.111542  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 19:13:29.111610  303486 kubeadm.go:394] duration metric: took 7m56.768319159s to StartCluster
	I0920 19:13:29.111640  303486 kubeadm.go:310] 
	I0920 19:13:29.111664  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:13:29.111734  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:13:29.157817  303486 cri.go:89] found id: ""
	I0920 19:13:29.157849  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.157859  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:13:29.157867  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:13:29.157950  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:13:29.192130  303486 cri.go:89] found id: ""
	I0920 19:13:29.192164  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.192179  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:13:29.192187  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:13:29.192243  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:13:29.227594  303486 cri.go:89] found id: ""
	I0920 19:13:29.227631  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.227642  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:13:29.227651  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:13:29.227724  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:13:29.261948  303486 cri.go:89] found id: ""
	I0920 19:13:29.261981  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.261996  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:13:29.262004  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:13:29.262072  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:13:29.295148  303486 cri.go:89] found id: ""
	I0920 19:13:29.295181  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.295191  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:13:29.295200  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:13:29.295270  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:13:29.328094  303486 cri.go:89] found id: ""
	I0920 19:13:29.328127  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.328135  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:13:29.328142  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:13:29.328194  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:13:29.368830  303486 cri.go:89] found id: ""
	I0920 19:13:29.368870  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.368878  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:13:29.368885  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:13:29.368947  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:13:29.420051  303486 cri.go:89] found id: ""
	I0920 19:13:29.420081  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.420091  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:13:29.420106  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:13:29.420123  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:13:29.498322  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:13:29.498350  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:13:29.498364  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:13:29.601796  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:13:29.601842  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:13:29.644325  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:13:29.644368  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:13:29.692691  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:13:29.692736  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0920 19:13:29.707508  303486 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 19:13:29.707577  303486 out.go:270] * 
	W0920 19:13:29.707646  303486 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 19:13:29.707664  303486 out.go:270] * 
	W0920 19:13:29.708560  303486 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 19:13:29.711313  303486 out.go:201] 
	W0920 19:13:29.712520  303486 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 19:13:29.712553  303486 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 19:13:29.712576  303486 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 19:13:29.713832  303486 out.go:201] 
	
	
	==> CRI-O <==
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.364430637Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860155364401542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cb5ce875-190b-4354-bf97-c476b92575dd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.365216058Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=630f994d-2bc9-4d45-8b41-c8e2e04e28d3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.365273731Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=630f994d-2bc9-4d45-8b41-c8e2e04e28d3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.365315598Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=630f994d-2bc9-4d45-8b41-c8e2e04e28d3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.395897653Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7094e0fd-3f87-49b5-a096-d8e719ae75e7 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.395986159Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7094e0fd-3f87-49b5-a096-d8e719ae75e7 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.397434740Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7528fecd-d139-4a6b-b0f3-2581c19baca0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.397859614Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860155397838980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7528fecd-d139-4a6b-b0f3-2581c19baca0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.398406194Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54a049ac-2e2b-464c-a33c-9531d7953a3a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.398514707Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54a049ac-2e2b-464c-a33c-9531d7953a3a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.398559386Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=54a049ac-2e2b-464c-a33c-9531d7953a3a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.431733496Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c0a9b5d-4729-4b85-ab0b-f03ca1b20979 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.431824281Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c0a9b5d-4729-4b85-ab0b-f03ca1b20979 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.433302705Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a50eaa1c-3096-4116-a2c2-20831bc9c350 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.433750971Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860155433723941,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a50eaa1c-3096-4116-a2c2-20831bc9c350 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.434264978Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b93a9bd6-6969-4681-9f4f-6f1ea4add511 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.434327315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b93a9bd6-6969-4681-9f4f-6f1ea4add511 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.434364417Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b93a9bd6-6969-4681-9f4f-6f1ea4add511 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.466181498Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c774a11f-85e0-489c-8300-988f51fffe39 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.466273435Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c774a11f-85e0-489c-8300-988f51fffe39 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.467402759Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=275ab638-975b-49fb-8118-508f89087ec2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.467873539Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860155467845494,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=275ab638-975b-49fb-8118-508f89087ec2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.468490585Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6a0ecdc-6111-4c83-ba23-653b5d1da721 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.468551965Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6a0ecdc-6111-4c83-ba23-653b5d1da721 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:22:35 old-k8s-version-425599 crio[625]: time="2024-09-20 19:22:35.468599765Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b6a0ecdc-6111-4c83-ba23-653b5d1da721 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep20 19:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051564] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038083] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.892466] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.024288] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.561052] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.228283] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.074163] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.093321] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.193588] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.158367] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.273001] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +6.667482] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.066383] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.180794] systemd-fstab-generator[1000]: Ignoring "noauto" option for root device
	[ +11.395339] kauditd_printk_skb: 46 callbacks suppressed
	[Sep20 19:09] systemd-fstab-generator[5034]: Ignoring "noauto" option for root device
	[Sep20 19:11] systemd-fstab-generator[5316]: Ignoring "noauto" option for root device
	[  +0.064997] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:22:35 up 17 min,  0 users,  load average: 0.08, 0.08, 0.05
	Linux old-k8s-version-425599 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 20 19:22:30 old-k8s-version-425599 kubelet[6488]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc000bf4fc0)
	Sep 20 19:22:30 old-k8s-version-425599 kubelet[6488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Sep 20 19:22:30 old-k8s-version-425599 kubelet[6488]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Sep 20 19:22:30 old-k8s-version-425599 kubelet[6488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Sep 20 19:22:30 old-k8s-version-425599 kubelet[6488]: goroutine 162 [select]:
	Sep 20 19:22:30 old-k8s-version-425599 kubelet[6488]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000ea1ef0, 0x4f0ac20, 0xc000cf4820, 0x1, 0xc0001000c0)
	Sep 20 19:22:30 old-k8s-version-425599 kubelet[6488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Sep 20 19:22:30 old-k8s-version-425599 kubelet[6488]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000255180, 0xc0001000c0)
	Sep 20 19:22:30 old-k8s-version-425599 kubelet[6488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 20 19:22:30 old-k8s-version-425599 kubelet[6488]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 20 19:22:30 old-k8s-version-425599 kubelet[6488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 20 19:22:30 old-k8s-version-425599 kubelet[6488]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000d30260, 0xc000bed480)
	Sep 20 19:22:30 old-k8s-version-425599 kubelet[6488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 20 19:22:30 old-k8s-version-425599 kubelet[6488]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 20 19:22:30 old-k8s-version-425599 kubelet[6488]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 20 19:22:30 old-k8s-version-425599 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 20 19:22:30 old-k8s-version-425599 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 20 19:22:30 old-k8s-version-425599 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Sep 20 19:22:30 old-k8s-version-425599 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 20 19:22:30 old-k8s-version-425599 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 20 19:22:30 old-k8s-version-425599 kubelet[6497]: I0920 19:22:30.879061    6497 server.go:416] Version: v1.20.0
	Sep 20 19:22:30 old-k8s-version-425599 kubelet[6497]: I0920 19:22:30.879342    6497 server.go:837] Client rotation is on, will bootstrap in background
	Sep 20 19:22:30 old-k8s-version-425599 kubelet[6497]: I0920 19:22:30.881422    6497 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 20 19:22:30 old-k8s-version-425599 kubelet[6497]: W0920 19:22:30.882504    6497 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 20 19:22:30 old-k8s-version-425599 kubelet[6497]: I0920 19:22:30.882919    6497 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-425599 -n old-k8s-version-425599
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-425599 -n old-k8s-version-425599: exit status 2 (236.689057ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-425599" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.65s)
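The failure above follows the K8S_KUBELET_NOT_RUNNING pattern minikube reports in the log: kubeadm init for v1.20.0 times out waiting for the control plane because the kubelet never becomes healthy, and the kubelet journal shows it exiting repeatedly (restart counter at 114) while warning that it cannot detect the current cgroup on cgroup v2. A minimal sketch of the remediation minikube itself suggests in the log, reusing the profile, driver, and runtime recorded in this report (an illustration only, not a command from the recorded run):

    # sketch only: same profile/driver/runtime as this report; not executed in this run
    minikube start -p old-k8s-version-425599 \
      --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd

If the kubelet still fails to come up, the troubleshooting commands quoted in the log ('systemctl status kubelet', 'journalctl -xeu kubelet', and 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a') can be run on the node via 'minikube ssh -p old-k8s-version-425599'.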

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (462.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-612312 -n default-k8s-diff-port-612312
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-20 19:26:29.451194837 +0000 UTC m=+6652.111641493
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-612312 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-612312 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.689µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-612312 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
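The harness waits up to 9m0s for a pod labeled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, then inspects the dashboard-metrics-scraper deployment. A sketch of the equivalent manual checks against the same kubeconfig context (assumes the cluster is reachable; shown for reference, not part of the recorded run):

    # manual equivalents of the harness checks above (sketch, not from the recorded run)
    kubectl --context default-k8s-diff-port-612312 get pods \
      -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
    kubectl --context default-k8s-diff-port-612312 describe \
      deploy/dashboard-metrics-scraper -n kubernetes-dashboard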
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-612312 -n default-k8s-diff-port-612312
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-612312 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-612312 logs -n 25: (1.200422675s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:57 UTC |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-037711             | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-037711                                   | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-339897            | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-339897                                  | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-612312  | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-037711                  | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-037711                                   | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC | 20 Sep 24 19:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-339897                 | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-425599        | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-339897                                  | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC | 20 Sep 24 19:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-612312       | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC | 20 Sep 24 19:09 UTC |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-425599                              | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-425599             | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-425599                              | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-425599                              | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| start   | -p newest-cni-398410 --memory=2200 --alsologtostderr   | newest-cni-398410            | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:26 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-037711                                   | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| addons  | enable metrics-server -p newest-cni-398410             | newest-cni-398410            | jenkins | v1.34.0 | 20 Sep 24 19:26 UTC | 20 Sep 24 19:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-398410                                   | newest-cni-398410            | jenkins | v1.34.0 | 20 Sep 24 19:26 UTC | 20 Sep 24 19:26 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-398410                  | newest-cni-398410            | jenkins | v1.34.0 | 20 Sep 24 19:26 UTC | 20 Sep 24 19:26 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-398410 --memory=2200 --alsologtostderr   | newest-cni-398410            | jenkins | v1.34.0 | 20 Sep 24 19:26 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-339897                                  | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 19:26 UTC | 20 Sep 24 19:26 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
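
	The table above lists the minikube invocations recorded for this run: command, arguments, profile, user, minikube version, start time, and end time. Rows whose last column is blank are the invocations that never recorded a completion time (the stop of default-k8s-diff-port-612312, the old-k8s-version-425599 restart, and the in-progress newest-cni-398410 start among them). Below is a minimal Go sketch, not part of minikube or the test harness, that pulls those rows out of a dump shaped like this one; the seven-column pipe layout is assumed from the table as printed.

```go
// audit_table.go - illustrative only: scan a pipe-delimited command table
// (as dumped above) and report the data rows that have no recorded end time.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		// Keep only data rows: they start with "|" but are not "----" separators.
		if !strings.HasPrefix(line, "|") || strings.HasPrefix(line, "|-") {
			continue
		}
		cols := strings.Split(line, "|")
		if len(cols) < 9 { // leading/trailing empty fields plus seven columns
			continue
		}
		cmd := strings.TrimSpace(cols[1])
		if cmd == "" {
			continue // continuation row carrying wrapped arguments, skip it
		}
		endTime := strings.TrimSpace(cols[len(cols)-2])
		if endTime == "" {
			profile := strings.TrimSpace(cols[3])
			fmt.Printf("no recorded end time: %-7s profile=%s\n", cmd, profile)
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
		os.Exit(1)
	}
}
```

	Piping the table through it (e.g. go run audit_table.go < report.txt) prints one line per invocation that never logged a finish time, which is usually the quickest way to line the table up against the failing tests listed at the top of the report.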
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:26:21
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:26:21.676794  311118 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:26:21.677232  311118 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:26:21.677247  311118 out.go:358] Setting ErrFile to fd 2...
	I0920 19:26:21.677255  311118 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:26:21.677718  311118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 19:26:21.678745  311118 out.go:352] Setting JSON to false
	I0920 19:26:21.679761  311118 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11325,"bootTime":1726849057,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 19:26:21.679877  311118 start.go:139] virtualization: kvm guest
	I0920 19:26:21.682334  311118 out.go:177] * [newest-cni-398410] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 19:26:21.684165  311118 notify.go:220] Checking for updates...
	I0920 19:26:21.684185  311118 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 19:26:21.685999  311118 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:26:21.687803  311118 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:26:21.689206  311118 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 19:26:21.691080  311118 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 19:26:21.692700  311118 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:26:21.694869  311118 config.go:182] Loaded profile config "newest-cni-398410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:26:21.695562  311118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:26:21.695693  311118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:26:21.714173  311118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41745
	I0920 19:26:21.714734  311118 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:26:21.715437  311118 main.go:141] libmachine: Using API Version  1
	I0920 19:26:21.715470  311118 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:26:21.715946  311118 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:26:21.716142  311118 main.go:141] libmachine: (newest-cni-398410) Calling .DriverName
	I0920 19:26:21.716408  311118 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:26:21.716738  311118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:26:21.716786  311118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:26:21.735441  311118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44289
	I0920 19:26:21.736020  311118 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:26:21.736563  311118 main.go:141] libmachine: Using API Version  1
	I0920 19:26:21.736581  311118 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:26:21.737059  311118 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:26:21.737362  311118 main.go:141] libmachine: (newest-cni-398410) Calling .DriverName
	I0920 19:26:21.782872  311118 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 19:26:21.784379  311118 start.go:297] selected driver: kvm2
	I0920 19:26:21.784401  311118 start.go:901] validating driver "kvm2" against &{Name:newest-cni-398410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:newest-cni-398410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] St
artHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:26:21.784558  311118 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:26:21.785674  311118 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:26:21.785777  311118 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 19:26:21.803103  311118 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 19:26:21.803531  311118 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0920 19:26:21.803574  311118 cni.go:84] Creating CNI manager for ""
	I0920 19:26:21.803618  311118 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:26:21.803669  311118 start.go:340] cluster config:
	{Name:newest-cni-398410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-398410 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:26:21.803803  311118 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:26:21.805976  311118 out.go:177] * Starting "newest-cni-398410" primary control-plane node in "newest-cni-398410" cluster
	I0920 19:26:21.807290  311118 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:26:21.807341  311118 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 19:26:21.807354  311118 cache.go:56] Caching tarball of preloaded images
	I0920 19:26:21.807463  311118 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 19:26:21.807477  311118 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 19:26:21.807630  311118 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/config.json ...
	I0920 19:26:21.807891  311118 start.go:360] acquireMachinesLock for newest-cni-398410: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:26:21.807961  311118 start.go:364] duration metric: took 36.527µs to acquireMachinesLock for "newest-cni-398410"
	I0920 19:26:21.807983  311118 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:26:21.807990  311118 fix.go:54] fixHost starting: 
	I0920 19:26:21.808406  311118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:26:21.808456  311118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:26:21.825898  311118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46591
	I0920 19:26:21.826803  311118 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:26:21.827506  311118 main.go:141] libmachine: Using API Version  1
	I0920 19:26:21.827529  311118 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:26:21.828344  311118 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:26:21.828556  311118 main.go:141] libmachine: (newest-cni-398410) Calling .DriverName
	I0920 19:26:21.828758  311118 main.go:141] libmachine: (newest-cni-398410) Calling .GetState
	I0920 19:26:21.830759  311118 fix.go:112] recreateIfNeeded on newest-cni-398410: state=Stopped err=<nil>
	I0920 19:26:21.830825  311118 main.go:141] libmachine: (newest-cni-398410) Calling .DriverName
	W0920 19:26:21.831005  311118 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:26:21.833210  311118 out.go:177] * Restarting existing kvm2 VM for "newest-cni-398410" ...
	I0920 19:26:21.834775  311118 main.go:141] libmachine: (newest-cni-398410) Calling .Start
	I0920 19:26:21.834984  311118 main.go:141] libmachine: (newest-cni-398410) Ensuring networks are active...
	I0920 19:26:21.835789  311118 main.go:141] libmachine: (newest-cni-398410) Ensuring network default is active
	I0920 19:26:22.014942  311118 main.go:141] libmachine: (newest-cni-398410) Ensuring network mk-newest-cni-398410 is active
	I0920 19:26:22.015606  311118 main.go:141] libmachine: (newest-cni-398410) Getting domain xml...
	I0920 19:26:22.565122  311118 main.go:141] libmachine: (newest-cni-398410) Creating domain...
	I0920 19:26:23.854309  311118 main.go:141] libmachine: (newest-cni-398410) Waiting to get IP...
	I0920 19:26:23.855549  311118 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:26:23.856023  311118 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:26:23.856103  311118 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:26:23.855995  311187 retry.go:31] will retry after 276.313441ms: waiting for machine to come up
	I0920 19:26:24.133801  311118 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:26:24.134648  311118 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:26:24.134674  311118 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:26:24.134608  311187 retry.go:31] will retry after 369.65742ms: waiting for machine to come up
	I0920 19:26:24.506375  311118 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:26:24.506850  311118 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:26:24.506878  311118 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:26:24.506802  311187 retry.go:31] will retry after 397.214833ms: waiting for machine to come up
	I0920 19:26:24.905357  311118 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:26:24.905801  311118 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:26:24.905826  311118 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:26:24.905720  311187 retry.go:31] will retry after 585.140317ms: waiting for machine to come up
	I0920 19:26:25.492606  311118 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:26:25.493047  311118 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:26:25.493071  311118 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:26:25.493012  311187 retry.go:31] will retry after 506.554957ms: waiting for machine to come up
	I0920 19:26:26.001142  311118 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:26:26.001936  311118 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:26:26.001965  311118 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:26:26.001829  311187 retry.go:31] will retry after 895.773746ms: waiting for machine to come up
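
	The "Last Start" section above is klog-formatted; its own header spells out the line shape as [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg. The Go sketch below (illustrative only, with the regular expression assumed from that header rather than taken from minikube's code) splits lines of that shape and prints just the warning/error/fatal entries, such as the fix.go:138 "unexpected machine state, will restart" warning above.

```go
// klog_grep.go - illustrative only: parse klog-style lines per the format
// documented in the dump header and print the W/E/F entries.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(`^\s*([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^\]]+)\] (.*)$`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // the cluster-config lines are long
	for sc.Scan() {
		m := klogLine.FindStringSubmatch(sc.Text())
		if m == nil {
			continue // wrapped continuation of a long line, or a non-klog header
		}
		sev, date, clock, source, msg := m[1], m[2], m[3], m[5], m[6]
		if sev == "W" || sev == "E" || sev == "F" {
			fmt.Printf("%s%s %s %s %s\n", sev, date, clock, source, msg)
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
	}
}
```

	Running the section through it (e.g. go run klog_grep.go < last-start.log) reduces the dump to the non-info lines, leaving the info-level start/fix chatter out of the way.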
	
	
	==> CRI-O <==
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.107570090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860390107537628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3cf7c7d2-3fc2-4c91-bf6d-9a0e4ee8459c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.108377742Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0837038-ff96-4aef-be2e-9a6cf4ba82eb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.108436696Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0837038-ff96-4aef-be2e-9a6cf4ba82eb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.108698274Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e45816fab0057c66cab7339828ff0d85ee0e168cb3929a33625992f24f4f574a,PodSandboxId:91ed3bd98860188faabda2896009e32422d2b2bddd2ca6e91a66e0f3d802b72b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726859127652897795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07eaf378-3cf0-4ff2-9742-d7fa0a2ef5df,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f,PodSandboxId:49793a7d58f56568bdfce8f0ef2fc27d628ac9dc830eb4751ea37df2d70cb7ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859124169687136,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-427x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b87f9f-4697-4d76-aed1-3d54720172c6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5,PodSandboxId:c438af92b37cb0f52656ecdf78d9d8d9a2384ed8ea3c4876440efc8ed818578e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859117150108869,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85,PodSandboxId:c438af92b37cb0f52656ecdf78d9d8d9a2384ed8ea3c4876440efc8ed818578e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726859116458648062,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4,PodSandboxId:59d696c615a6136109be7b56bc4b65a45c328d0dee39e0252594e74c8eab66f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726859116424719189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zp8l5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fe30e51-ef3f-4448-916a
-8ad75832b207,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862,PodSandboxId:47652a84f58bf0414a4ed6dee54f09aa0fc0b390d0d469df5415a941b6390f4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859112692847849,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ef7803d3c0b8d4bcc4f
cc2c5dc783a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba,PodSandboxId:e3726bea5b79f107a2daee48e8792ee710f3ba45b5908af8cbe2a27e892e2267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859112742677668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e84b82e8bad235e9885f342d9fca6313,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281,PodSandboxId:90cff8f0e4e7881bc8ac4f75ab7c770e5f4aadfd26e6957301b4078fb37856c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859112712192701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2692a2a39fbd70db2aa422a84035
be53,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f,PodSandboxId:9db4832ae0433b55edaff88b1e24188886c84cf4ab2f05e8986d4888f5577a28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859112723372867,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2124512131de5a1d81554836ebcef0
52,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d0837038-ff96-4aef-be2e-9a6cf4ba82eb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.146188292Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3c5c4c48-5e1e-44ed-aa1d-0fe4b681a0d2 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.146575746Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c5c4c48-5e1e-44ed-aa1d-0fe4b681a0d2 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.147969886Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f6a9cc6-00ff-4f4e-88d4-562579329b6b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.148375444Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860390148352693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f6a9cc6-00ff-4f4e-88d4-562579329b6b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.148973027Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c2c7193-dd1b-4a57-909d-44690168ac95 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.149042316Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c2c7193-dd1b-4a57-909d-44690168ac95 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.149245874Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e45816fab0057c66cab7339828ff0d85ee0e168cb3929a33625992f24f4f574a,PodSandboxId:91ed3bd98860188faabda2896009e32422d2b2bddd2ca6e91a66e0f3d802b72b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726859127652897795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07eaf378-3cf0-4ff2-9742-d7fa0a2ef5df,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f,PodSandboxId:49793a7d58f56568bdfce8f0ef2fc27d628ac9dc830eb4751ea37df2d70cb7ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859124169687136,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-427x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b87f9f-4697-4d76-aed1-3d54720172c6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5,PodSandboxId:c438af92b37cb0f52656ecdf78d9d8d9a2384ed8ea3c4876440efc8ed818578e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859117150108869,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85,PodSandboxId:c438af92b37cb0f52656ecdf78d9d8d9a2384ed8ea3c4876440efc8ed818578e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726859116458648062,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4,PodSandboxId:59d696c615a6136109be7b56bc4b65a45c328d0dee39e0252594e74c8eab66f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726859116424719189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zp8l5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fe30e51-ef3f-4448-916a
-8ad75832b207,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862,PodSandboxId:47652a84f58bf0414a4ed6dee54f09aa0fc0b390d0d469df5415a941b6390f4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859112692847849,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ef7803d3c0b8d4bcc4f
cc2c5dc783a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba,PodSandboxId:e3726bea5b79f107a2daee48e8792ee710f3ba45b5908af8cbe2a27e892e2267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859112742677668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e84b82e8bad235e9885f342d9fca6313,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281,PodSandboxId:90cff8f0e4e7881bc8ac4f75ab7c770e5f4aadfd26e6957301b4078fb37856c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859112712192701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2692a2a39fbd70db2aa422a84035
be53,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f,PodSandboxId:9db4832ae0433b55edaff88b1e24188886c84cf4ab2f05e8986d4888f5577a28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859112723372867,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2124512131de5a1d81554836ebcef0
52,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6c2c7193-dd1b-4a57-909d-44690168ac95 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.192397794Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e890dc56-a162-4112-bb13-eb0d9572c128 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.192521101Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e890dc56-a162-4112-bb13-eb0d9572c128 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.193819823Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=959ebd9a-ad85-48c9-aa7e-45b9b4bcccd5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.194743558Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860390194677405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=959ebd9a-ad85-48c9-aa7e-45b9b4bcccd5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.196181771Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=53114dc9-b623-41d8-97aa-d92758e0271a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.196243504Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=53114dc9-b623-41d8-97aa-d92758e0271a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.196462062Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e45816fab0057c66cab7339828ff0d85ee0e168cb3929a33625992f24f4f574a,PodSandboxId:91ed3bd98860188faabda2896009e32422d2b2bddd2ca6e91a66e0f3d802b72b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726859127652897795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07eaf378-3cf0-4ff2-9742-d7fa0a2ef5df,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f,PodSandboxId:49793a7d58f56568bdfce8f0ef2fc27d628ac9dc830eb4751ea37df2d70cb7ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859124169687136,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-427x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b87f9f-4697-4d76-aed1-3d54720172c6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5,PodSandboxId:c438af92b37cb0f52656ecdf78d9d8d9a2384ed8ea3c4876440efc8ed818578e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859117150108869,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85,PodSandboxId:c438af92b37cb0f52656ecdf78d9d8d9a2384ed8ea3c4876440efc8ed818578e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726859116458648062,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4,PodSandboxId:59d696c615a6136109be7b56bc4b65a45c328d0dee39e0252594e74c8eab66f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726859116424719189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zp8l5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fe30e51-ef3f-4448-916a
-8ad75832b207,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862,PodSandboxId:47652a84f58bf0414a4ed6dee54f09aa0fc0b390d0d469df5415a941b6390f4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859112692847849,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ef7803d3c0b8d4bcc4f
cc2c5dc783a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba,PodSandboxId:e3726bea5b79f107a2daee48e8792ee710f3ba45b5908af8cbe2a27e892e2267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859112742677668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e84b82e8bad235e9885f342d9fca6313,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281,PodSandboxId:90cff8f0e4e7881bc8ac4f75ab7c770e5f4aadfd26e6957301b4078fb37856c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859112712192701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2692a2a39fbd70db2aa422a84035
be53,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f,PodSandboxId:9db4832ae0433b55edaff88b1e24188886c84cf4ab2f05e8986d4888f5577a28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859112723372867,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2124512131de5a1d81554836ebcef0
52,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=53114dc9-b623-41d8-97aa-d92758e0271a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.230848985Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=14980766-319a-4141-8cc8-42c50447993f name=/runtime.v1.RuntimeService/Version
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.230926282Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=14980766-319a-4141-8cc8-42c50447993f name=/runtime.v1.RuntimeService/Version
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.232180300Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=47b568cf-b496-4797-b0b2-be02f3998c9d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.232717635Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860390232571180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47b568cf-b496-4797-b0b2-be02f3998c9d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.233271813Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22af07ed-c4c8-413d-a5fd-e891b22e0076 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.233349553Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22af07ed-c4c8-413d-a5fd-e891b22e0076 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:30 default-k8s-diff-port-612312 crio[704]: time="2024-09-20 19:26:30.233555562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e45816fab0057c66cab7339828ff0d85ee0e168cb3929a33625992f24f4f574a,PodSandboxId:91ed3bd98860188faabda2896009e32422d2b2bddd2ca6e91a66e0f3d802b72b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726859127652897795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07eaf378-3cf0-4ff2-9742-d7fa0a2ef5df,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f,PodSandboxId:49793a7d58f56568bdfce8f0ef2fc27d628ac9dc830eb4751ea37df2d70cb7ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859124169687136,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-427x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b87f9f-4697-4d76-aed1-3d54720172c6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5,PodSandboxId:c438af92b37cb0f52656ecdf78d9d8d9a2384ed8ea3c4876440efc8ed818578e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859117150108869,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85,PodSandboxId:c438af92b37cb0f52656ecdf78d9d8d9a2384ed8ea3c4876440efc8ed818578e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726859116458648062,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4,PodSandboxId:59d696c615a6136109be7b56bc4b65a45c328d0dee39e0252594e74c8eab66f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726859116424719189,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zp8l5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fe30e51-ef3f-4448-916a
-8ad75832b207,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862,PodSandboxId:47652a84f58bf0414a4ed6dee54f09aa0fc0b390d0d469df5415a941b6390f4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859112692847849,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ef7803d3c0b8d4bcc4f
cc2c5dc783a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba,PodSandboxId:e3726bea5b79f107a2daee48e8792ee710f3ba45b5908af8cbe2a27e892e2267,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859112742677668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e84b82e8bad235e9885f342d9fca6313,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281,PodSandboxId:90cff8f0e4e7881bc8ac4f75ab7c770e5f4aadfd26e6957301b4078fb37856c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859112712192701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2692a2a39fbd70db2aa422a84035
be53,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f,PodSandboxId:9db4832ae0433b55edaff88b1e24188886c84cf4ab2f05e8986d4888f5577a28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859112723372867,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2124512131de5a1d81554836ebcef0
52,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22af07ed-c4c8-413d-a5fd-e891b22e0076 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e45816fab0057       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   91ed3bd988601       busybox
	88f0364540083       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      21 minutes ago      Running             coredns                   1                   49793a7d58f56       coredns-7c65d6cfc9-427x2
	a77d8a3964187       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   c438af92b37cb       storage-provisioner
	0d20ef881ab96       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   c438af92b37cb       storage-provisioner
	3591419c15d21       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      21 minutes ago      Running             kube-proxy                1                   59d696c615a61       kube-proxy-zp8l5
	9a3d66bde4ebb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      21 minutes ago      Running             kube-controller-manager   1                   e3726bea5b79f       kube-controller-manager-default-k8s-diff-port-612312
	f9971978cdd07       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      21 minutes ago      Running             kube-apiserver            1                   9db4832ae0433       kube-apiserver-default-k8s-diff-port-612312
	5422e85be2062       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      21 minutes ago      Running             etcd                      1                   90cff8f0e4e78       etcd-default-k8s-diff-port-612312
	25594230a0a82       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      21 minutes ago      Running             kube-scheduler            1                   47652a84f58bf       kube-scheduler-default-k8s-diff-port-612312
	
	
	==> coredns [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:45063 - 25900 "HINFO IN 2430839124469883219.1019100563124711711. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016951084s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-612312
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-612312
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=default-k8s-diff-port-612312
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_57_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:57:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-612312
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:26:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:26:10 +0000   Fri, 20 Sep 2024 18:57:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:26:10 +0000   Fri, 20 Sep 2024 18:57:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:26:10 +0000   Fri, 20 Sep 2024 18:57:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:26:10 +0000   Fri, 20 Sep 2024 19:05:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.230
	  Hostname:    default-k8s-diff-port-612312
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b60cadd04c8448ca885f0a11b869fa62
	  System UUID:                b60cadd0-4c84-48ca-885f-0a11b869fa62
	  Boot ID:                    477db3ab-d4f7-4411-8b51-5bfccc5662b2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-427x2                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-default-k8s-diff-port-612312                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-612312             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-612312    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-zp8l5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-612312             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-2tnqc                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-612312 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-612312 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-612312 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-612312 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-612312 event: Registered Node default-k8s-diff-port-612312 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-612312 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-612312 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-612312 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-612312 event: Registered Node default-k8s-diff-port-612312 in Controller
	
	
	==> dmesg <==
	[Sep20 19:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.060784] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037864] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.920706] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.934068] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.600085] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep20 19:05] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.068713] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068845] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.184881] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.116235] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.289531] systemd-fstab-generator[695]: Ignoring "noauto" option for root device
	[  +4.111901] systemd-fstab-generator[787]: Ignoring "noauto" option for root device
	[  +1.866213] systemd-fstab-generator[908]: Ignoring "noauto" option for root device
	[  +0.064762] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.530202] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.400437] systemd-fstab-generator[1588]: Ignoring "noauto" option for root device
	[  +3.328604] kauditd_printk_skb: 71 callbacks suppressed
	[  +5.591392] kauditd_printk_skb: 44 callbacks suppressed
	
	
	==> etcd [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281] <==
	{"level":"info","ts":"2024-09-20T19:05:31.536512Z","caller":"traceutil/trace.go:171","msg":"trace[920165841] range","detail":"{range_begin:/registry/pods/kube-system/etcd-default-k8s-diff-port-612312; range_end:; response_count:1; response_revision:574; }","duration":"445.869155ms","start":"2024-09-20T19:05:31.090629Z","end":"2024-09-20T19:05:31.536498Z","steps":["trace[920165841] 'agreement among raft nodes before linearized reading'  (duration: 445.419722ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:05:31.536559Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T19:05:31.090554Z","time spent":"445.994061ms","remote":"127.0.0.1:47782","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":5942,"request content":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-612312\" "}
	{"level":"info","ts":"2024-09-20T19:05:31.834381Z","caller":"traceutil/trace.go:171","msg":"trace[588460150] linearizableReadLoop","detail":"{readStateIndex:608; appliedIndex:607; }","duration":"289.001785ms","start":"2024-09-20T19:05:31.545361Z","end":"2024-09-20T19:05:31.834363Z","steps":["trace[588460150] 'read index received'  (duration: 288.816337ms)","trace[588460150] 'applied index is now lower than readState.Index'  (duration: 184.862µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T19:05:31.834719Z","caller":"traceutil/trace.go:171","msg":"trace[163714795] transaction","detail":"{read_only:false; response_revision:575; number_of_response:1; }","duration":"289.691398ms","start":"2024-09-20T19:05:31.545012Z","end":"2024-09-20T19:05:31.834703Z","steps":["trace[163714795] 'process raft request'  (duration: 289.208961ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:05:31.834825Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"239.411952ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T19:05:31.835475Z","caller":"traceutil/trace.go:171","msg":"trace[405837260] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:575; }","duration":"240.074628ms","start":"2024-09-20T19:05:31.595389Z","end":"2024-09-20T19:05:31.835463Z","steps":["trace[405837260] 'agreement among raft nodes before linearized reading'  (duration: 239.390651ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:05:31.834962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.593975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-612312\" ","response":"range_response_count:1 size:5537"}
	{"level":"info","ts":"2024-09-20T19:05:31.836077Z","caller":"traceutil/trace.go:171","msg":"trace[250580635] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-612312; range_end:; response_count:1; response_revision:575; }","duration":"290.708759ms","start":"2024-09-20T19:05:31.545357Z","end":"2024-09-20T19:05:31.836065Z","steps":["trace[250580635] 'agreement among raft nodes before linearized reading'  (duration: 289.493163ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:05:31.835005Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.204984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1128"}
	{"level":"info","ts":"2024-09-20T19:05:31.836303Z","caller":"traceutil/trace.go:171","msg":"trace[499307255] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:575; }","duration":"171.498311ms","start":"2024-09-20T19:05:31.664794Z","end":"2024-09-20T19:05:31.836292Z","steps":["trace[499307255] 'agreement among raft nodes before linearized reading'  (duration: 170.185342ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:05:32.057787Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.935365ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-612312\" ","response":"range_response_count:1 size:5747"}
	{"level":"info","ts":"2024-09-20T19:05:32.057863Z","caller":"traceutil/trace.go:171","msg":"trace[309258126] range","detail":"{range_begin:/registry/pods/kube-system/etcd-default-k8s-diff-port-612312; range_end:; response_count:1; response_revision:575; }","duration":"122.02661ms","start":"2024-09-20T19:05:31.935822Z","end":"2024-09-20T19:05:32.057849Z","steps":["trace[309258126] 'range keys from in-memory index tree'  (duration: 121.81322ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:05:32.318761Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.392799ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13914594599670812843 > lease_revoke:<id:411a9210d2f28f6e>","response":"size:28"}
	{"level":"info","ts":"2024-09-20T19:05:32.318855Z","caller":"traceutil/trace.go:171","msg":"trace[1842132406] linearizableReadLoop","detail":"{readStateIndex:609; appliedIndex:608; }","duration":"178.115006ms","start":"2024-09-20T19:05:32.140726Z","end":"2024-09-20T19:05:32.318841Z","steps":["trace[1842132406] 'read index received'  (duration: 48.581891ms)","trace[1842132406] 'applied index is now lower than readState.Index'  (duration: 129.531985ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T19:05:32.319008Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.267305ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-2tnqc\" ","response":"range_response_count:1 size:4396"}
	{"level":"info","ts":"2024-09-20T19:05:32.319031Z","caller":"traceutil/trace.go:171","msg":"trace[1114442545] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-2tnqc; range_end:; response_count:1; response_revision:575; }","duration":"178.303805ms","start":"2024-09-20T19:05:32.140720Z","end":"2024-09-20T19:05:32.319024Z","steps":["trace[1114442545] 'agreement among raft nodes before linearized reading'  (duration: 178.157287ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:15:14.344072Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":828}
	{"level":"info","ts":"2024-09-20T19:15:14.355039Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":828,"took":"10.567977ms","hash":2083178493,"current-db-size-bytes":2584576,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2584576,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-09-20T19:15:14.355101Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2083178493,"revision":828,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T19:20:14.352984Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1071}
	{"level":"info","ts":"2024-09-20T19:20:14.357192Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1071,"took":"3.836443ms","hash":1413822895,"current-db-size-bytes":2584576,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1536000,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-09-20T19:20:14.357232Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1413822895,"revision":1071,"compact-revision":828}
	{"level":"info","ts":"2024-09-20T19:25:14.360942Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1315}
	{"level":"info","ts":"2024-09-20T19:25:14.364332Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1315,"took":"3.105511ms","hash":1215928200,"current-db-size-bytes":2584576,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1515520,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-09-20T19:25:14.364379Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1215928200,"revision":1315,"compact-revision":1071}
	
	
	==> kernel <==
	 19:26:30 up 21 min,  0 users,  load average: 0.04, 0.11, 0.14
	Linux default-k8s-diff-port-612312 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f] <==
	I0920 19:23:16.636645       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 19:23:16.636686       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 19:25:15.637974       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:25:15.638312       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0920 19:25:16.640772       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:25:16.640853       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 19:25:16.640786       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:25:16.640954       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 19:25:16.642231       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 19:25:16.642299       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 19:26:16.643162       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:26:16.643305       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0920 19:26:16.643162       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:26:16.643366       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0920 19:26:16.644641       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 19:26:16.644685       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba] <==
	E0920 19:21:19.338878       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:21:19.970367       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 19:21:34.091693       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="289.371µs"
	I0920 19:21:46.090043       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="68.33µs"
	E0920 19:21:49.346156       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:21:49.979050       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:22:19.353422       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:22:19.986897       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:22:49.359565       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:22:49.997153       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:23:19.366182       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:23:20.005173       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:23:49.375173       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:23:50.017635       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:24:19.382518       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:24:20.026938       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:24:49.388777       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:24:50.035909       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:25:19.397269       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:25:20.045191       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:25:49.403481       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:25:50.054669       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 19:26:10.572685       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-612312"
	E0920 19:26:19.410342       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:26:20.063322       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 19:05:16.724297       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 19:05:16.740718       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.230"]
	E0920 19:05:16.740803       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 19:05:16.811167       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 19:05:16.811280       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 19:05:16.811326       1 server_linux.go:169] "Using iptables Proxier"
	I0920 19:05:16.819725       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 19:05:16.820075       1 server.go:483] "Version info" version="v1.31.1"
	I0920 19:05:16.820100       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:05:16.821744       1 config.go:199] "Starting service config controller"
	I0920 19:05:16.821800       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 19:05:16.821847       1 config.go:105] "Starting endpoint slice config controller"
	I0920 19:05:16.821854       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 19:05:16.823470       1 config.go:328] "Starting node config controller"
	I0920 19:05:16.823491       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 19:05:16.922754       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 19:05:16.922819       1 shared_informer.go:320] Caches are synced for service config
	I0920 19:05:16.923746       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862] <==
	I0920 19:05:13.773666       1 serving.go:386] Generated self-signed cert in-memory
	W0920 19:05:15.580225       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 19:05:15.580351       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 19:05:15.580381       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 19:05:15.580444       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 19:05:15.633085       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 19:05:15.635645       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:05:15.639770       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 19:05:15.639928       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 19:05:15.639978       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 19:05:15.640014       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 19:05:15.740811       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 19:25:31 default-k8s-diff-port-612312 kubelet[915]: E0920 19:25:31.073490     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2tnqc" podUID="35ce9a11-e606-41da-84bf-b3c5e9a18245"
	Sep 20 19:25:31 default-k8s-diff-port-612312 kubelet[915]: E0920 19:25:31.347038     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860331346498336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:25:31 default-k8s-diff-port-612312 kubelet[915]: E0920 19:25:31.347079     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860331346498336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:25:41 default-k8s-diff-port-612312 kubelet[915]: E0920 19:25:41.349658     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860341348816830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:25:41 default-k8s-diff-port-612312 kubelet[915]: E0920 19:25:41.350015     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860341348816830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:25:46 default-k8s-diff-port-612312 kubelet[915]: E0920 19:25:46.073174     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2tnqc" podUID="35ce9a11-e606-41da-84bf-b3c5e9a18245"
	Sep 20 19:25:51 default-k8s-diff-port-612312 kubelet[915]: E0920 19:25:51.352364     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860351352047869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:25:51 default-k8s-diff-port-612312 kubelet[915]: E0920 19:25:51.352425     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860351352047869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:26:01 default-k8s-diff-port-612312 kubelet[915]: E0920 19:26:01.073481     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2tnqc" podUID="35ce9a11-e606-41da-84bf-b3c5e9a18245"
	Sep 20 19:26:01 default-k8s-diff-port-612312 kubelet[915]: E0920 19:26:01.354309     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860361353863141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:26:01 default-k8s-diff-port-612312 kubelet[915]: E0920 19:26:01.354372     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860361353863141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:26:11 default-k8s-diff-port-612312 kubelet[915]: E0920 19:26:11.087048     915 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 19:26:11 default-k8s-diff-port-612312 kubelet[915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 19:26:11 default-k8s-diff-port-612312 kubelet[915]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 19:26:11 default-k8s-diff-port-612312 kubelet[915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 19:26:11 default-k8s-diff-port-612312 kubelet[915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 19:26:11 default-k8s-diff-port-612312 kubelet[915]: E0920 19:26:11.356149     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860371355747316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:26:11 default-k8s-diff-port-612312 kubelet[915]: E0920 19:26:11.356180     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860371355747316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:26:16 default-k8s-diff-port-612312 kubelet[915]: E0920 19:26:16.073741     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2tnqc" podUID="35ce9a11-e606-41da-84bf-b3c5e9a18245"
	Sep 20 19:26:21 default-k8s-diff-port-612312 kubelet[915]: E0920 19:26:21.358270     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860381357817255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:26:21 default-k8s-diff-port-612312 kubelet[915]: E0920 19:26:21.358569     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860381357817255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:26:28 default-k8s-diff-port-612312 kubelet[915]: E0920 19:26:28.091448     915 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 20 19:26:28 default-k8s-diff-port-612312 kubelet[915]: E0920 19:26:28.091881     915 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 20 19:26:28 default-k8s-diff-port-612312 kubelet[915]: E0920 19:26:28.092181     915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nlsmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPr
opagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:
nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-2tnqc_kube-system(35ce9a11-e606-41da-84bf-b3c5e9a18245): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Sep 20 19:26:28 default-k8s-diff-port-612312 kubelet[915]: E0920 19:26:28.093414     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-2tnqc" podUID="35ce9a11-e606-41da-84bf-b3c5e9a18245"
	
	
	==> storage-provisioner [0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85] <==
	I0920 19:05:16.613454       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0920 19:05:16.624737       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5] <==
	I0920 19:05:17.278651       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 19:05:17.287139       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 19:05:17.287223       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 19:05:34.871916       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 19:05:34.872182       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-612312_fe527229-f2cc-48af-acbb-f24b59897505!
	I0920 19:05:34.872906       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bf5abdb5-77a8-4a50-ac0c-3da169d5f861", APIVersion:"v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-612312_fe527229-f2cc-48af-acbb-f24b59897505 became leader
	I0920 19:05:34.973220       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-612312_fe527229-f2cc-48af-acbb-f24b59897505!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-612312 -n default-k8s-diff-port-612312
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-612312 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-2tnqc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-612312 describe pod metrics-server-6867b74b74-2tnqc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-612312 describe pod metrics-server-6867b74b74-2tnqc: exit status 1 (85.741417ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-2tnqc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-612312 describe pod metrics-server-6867b74b74-2tnqc: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (462.44s)
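The kubelet log above shows why the pod reported at helpers_test.go:272 never runs: the metrics-server addon was enabled against the deliberately unreachable registry fake.domain, so every pull of fake.domain/registry.k8s.io/echoserver:1.4 ends in ErrImagePull/ImagePullBackOff, and the dashboard pods the test waits on never appear. A minimal sketch of how that state could be inspected by hand, assuming the profile name from the output above and the addon's usual k8s-app labels and metrics-server deployment name (not confirmed by this report):

	# dashboard pods the test polls for (label assumed from the embed-certs test output below)
	kubectl --context default-k8s-diff-port-612312 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
	# metrics-server pod stuck in ImagePullBackOff against fake.domain
	kubectl --context default-k8s-diff-port-612312 get pods -n kube-system -l k8s-app=metrics-server -o wide
	# deployment-level view of the pull failure (deployment name assumed)
	kubectl --context default-k8s-diff-port-612312 describe deploy/metrics-server -n kube-system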

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (438.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-339897 -n embed-certs-339897
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-20 19:26:19.592140817 +0000 UTC m=+6642.252587476
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-339897 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-339897 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.611µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-339897 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-339897 -n embed-certs-339897
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-339897 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-339897 logs -n 25: (1.312487559s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-793540 sudo crio                            | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-793540                                      | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-896665 | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | disable-driver-mounts-896665                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:57 UTC |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-037711             | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-037711                                   | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-339897            | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-339897                                  | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-612312  | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-037711                  | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-037711                                   | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC | 20 Sep 24 19:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-339897                 | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-425599        | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-339897                                  | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC | 20 Sep 24 19:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-612312       | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC | 20 Sep 24 19:09 UTC |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-425599                              | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-425599             | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-425599                              | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-425599                              | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| start   | -p newest-cni-398410 --memory=2200 --alsologtostderr   | newest-cni-398410            | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:26 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-037711                                   | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| addons  | enable metrics-server -p newest-cni-398410             | newest-cni-398410            | jenkins | v1.34.0 | 20 Sep 24 19:26 UTC | 20 Sep 24 19:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-398410                                   | newest-cni-398410            | jenkins | v1.34.0 | 20 Sep 24 19:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:25:23
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:25:23.434027  310233 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:25:23.434306  310233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:25:23.434316  310233 out.go:358] Setting ErrFile to fd 2...
	I0920 19:25:23.434321  310233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:25:23.434513  310233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 19:25:23.435117  310233 out.go:352] Setting JSON to false
	I0920 19:25:23.436154  310233 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11266,"bootTime":1726849057,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 19:25:23.436256  310233 start.go:139] virtualization: kvm guest
	I0920 19:25:23.438550  310233 out.go:177] * [newest-cni-398410] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 19:25:23.440224  310233 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 19:25:23.440225  310233 notify.go:220] Checking for updates...
	I0920 19:25:23.443174  310233 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:25:23.444565  310233 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:25:23.446106  310233 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 19:25:23.447412  310233 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 19:25:23.448801  310233 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:25:23.450682  310233 config.go:182] Loaded profile config "default-k8s-diff-port-612312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:25:23.450774  310233 config.go:182] Loaded profile config "embed-certs-339897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:25:23.450857  310233 config.go:182] Loaded profile config "no-preload-037711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:25:23.450957  310233 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:25:23.491820  310233 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 19:25:23.493346  310233 start.go:297] selected driver: kvm2
	I0920 19:25:23.493366  310233 start.go:901] validating driver "kvm2" against <nil>
	I0920 19:25:23.493379  310233 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:25:23.494160  310233 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:25:23.494260  310233 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 19:25:23.511406  310233 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 19:25:23.511493  310233 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0920 19:25:23.511565  310233 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0920 19:25:23.511829  310233 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0920 19:25:23.511866  310233 cni.go:84] Creating CNI manager for ""
	I0920 19:25:23.511930  310233 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:25:23.511940  310233 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 19:25:23.512005  310233 start.go:340] cluster config:
	{Name:newest-cni-398410 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-398410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:25:23.512137  310233 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:25:23.515010  310233 out.go:177] * Starting "newest-cni-398410" primary control-plane node in "newest-cni-398410" cluster
	I0920 19:25:23.517462  310233 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:25:23.517507  310233 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 19:25:23.517517  310233 cache.go:56] Caching tarball of preloaded images
	I0920 19:25:23.517630  310233 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 19:25:23.517645  310233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 19:25:23.517761  310233 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/config.json ...
	I0920 19:25:23.517787  310233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/config.json: {Name:mk1b3d753bbd27adfd710d2d761bbc72d5415fd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:25:23.518015  310233 start.go:360] acquireMachinesLock for newest-cni-398410: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:25:23.518056  310233 start.go:364] duration metric: took 22.418µs to acquireMachinesLock for "newest-cni-398410"
	I0920 19:25:23.518080  310233 start.go:93] Provisioning new machine with config: &{Name:newest-cni-398410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:newest-cni-398410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:25:23.518187  310233 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 19:25:23.521147  310233 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 19:25:23.521321  310233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:25:23.521360  310233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:25:23.537831  310233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35297
	I0920 19:25:23.538411  310233 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:25:23.538998  310233 main.go:141] libmachine: Using API Version  1
	I0920 19:25:23.539021  310233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:25:23.539350  310233 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:25:23.539521  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetMachineName
	I0920 19:25:23.539658  310233 main.go:141] libmachine: (newest-cni-398410) Calling .DriverName
	I0920 19:25:23.539788  310233 start.go:159] libmachine.API.Create for "newest-cni-398410" (driver="kvm2")
	I0920 19:25:23.539814  310233 client.go:168] LocalClient.Create starting
	I0920 19:25:23.539843  310233 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem
	I0920 19:25:23.539876  310233 main.go:141] libmachine: Decoding PEM data...
	I0920 19:25:23.539890  310233 main.go:141] libmachine: Parsing certificate...
	I0920 19:25:23.539947  310233 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem
	I0920 19:25:23.539966  310233 main.go:141] libmachine: Decoding PEM data...
	I0920 19:25:23.539977  310233 main.go:141] libmachine: Parsing certificate...
	I0920 19:25:23.539993  310233 main.go:141] libmachine: Running pre-create checks...
	I0920 19:25:23.540002  310233 main.go:141] libmachine: (newest-cni-398410) Calling .PreCreateCheck
	I0920 19:25:23.540343  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetConfigRaw
	I0920 19:25:23.540721  310233 main.go:141] libmachine: Creating machine...
	I0920 19:25:23.540735  310233 main.go:141] libmachine: (newest-cni-398410) Calling .Create
	I0920 19:25:23.540927  310233 main.go:141] libmachine: (newest-cni-398410) Creating KVM machine...
	I0920 19:25:23.542395  310233 main.go:141] libmachine: (newest-cni-398410) DBG | found existing default KVM network
	I0920 19:25:23.544082  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:23.543911  310257 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000211940}
	I0920 19:25:23.544138  310233 main.go:141] libmachine: (newest-cni-398410) DBG | created network xml: 
	I0920 19:25:23.544159  310233 main.go:141] libmachine: (newest-cni-398410) DBG | <network>
	I0920 19:25:23.544174  310233 main.go:141] libmachine: (newest-cni-398410) DBG |   <name>mk-newest-cni-398410</name>
	I0920 19:25:23.544184  310233 main.go:141] libmachine: (newest-cni-398410) DBG |   <dns enable='no'/>
	I0920 19:25:23.544207  310233 main.go:141] libmachine: (newest-cni-398410) DBG |   
	I0920 19:25:23.544231  310233 main.go:141] libmachine: (newest-cni-398410) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 19:25:23.544246  310233 main.go:141] libmachine: (newest-cni-398410) DBG |     <dhcp>
	I0920 19:25:23.544258  310233 main.go:141] libmachine: (newest-cni-398410) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 19:25:23.544268  310233 main.go:141] libmachine: (newest-cni-398410) DBG |     </dhcp>
	I0920 19:25:23.544282  310233 main.go:141] libmachine: (newest-cni-398410) DBG |   </ip>
	I0920 19:25:23.544294  310233 main.go:141] libmachine: (newest-cni-398410) DBG |   
	I0920 19:25:23.544304  310233 main.go:141] libmachine: (newest-cni-398410) DBG | </network>
	I0920 19:25:23.544315  310233 main.go:141] libmachine: (newest-cni-398410) DBG | 
	I0920 19:25:23.550886  310233 main.go:141] libmachine: (newest-cni-398410) DBG | trying to create private KVM network mk-newest-cni-398410 192.168.39.0/24...
	I0920 19:25:23.636900  310233 main.go:141] libmachine: (newest-cni-398410) DBG | private KVM network mk-newest-cni-398410 192.168.39.0/24 created
	I0920 19:25:23.636940  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:23.636871  310257 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 19:25:23.636952  310233 main.go:141] libmachine: (newest-cni-398410) Setting up store path in /home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410 ...
	I0920 19:25:23.636969  310233 main.go:141] libmachine: (newest-cni-398410) Building disk image from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 19:25:23.637123  310233 main.go:141] libmachine: (newest-cni-398410) Downloading /home/jenkins/minikube-integration/19679-237658/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 19:25:23.935820  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:23.935649  310257 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410/id_rsa...
	I0920 19:25:24.049232  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:24.049082  310257 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410/newest-cni-398410.rawdisk...
	I0920 19:25:24.049279  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Writing magic tar header
	I0920 19:25:24.049331  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Writing SSH key tar header
	I0920 19:25:24.049358  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:24.049202  310257 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410 ...
	I0920 19:25:24.049375  310233 main.go:141] libmachine: (newest-cni-398410) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410 (perms=drwx------)
	I0920 19:25:24.049396  310233 main.go:141] libmachine: (newest-cni-398410) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines (perms=drwxr-xr-x)
	I0920 19:25:24.049411  310233 main.go:141] libmachine: (newest-cni-398410) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube (perms=drwxr-xr-x)
	I0920 19:25:24.049424  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410
	I0920 19:25:24.049440  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines
	I0920 19:25:24.049462  310233 main.go:141] libmachine: (newest-cni-398410) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658 (perms=drwxrwxr-x)
	I0920 19:25:24.049475  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 19:25:24.049489  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658
	I0920 19:25:24.049501  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 19:25:24.049514  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Checking permissions on dir: /home/jenkins
	I0920 19:25:24.049525  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Checking permissions on dir: /home
	I0920 19:25:24.049537  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Skipping /home - not owner
	I0920 19:25:24.049551  310233 main.go:141] libmachine: (newest-cni-398410) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 19:25:24.049565  310233 main.go:141] libmachine: (newest-cni-398410) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 19:25:24.049576  310233 main.go:141] libmachine: (newest-cni-398410) Creating domain...
	I0920 19:25:24.050737  310233 main.go:141] libmachine: (newest-cni-398410) define libvirt domain using xml: 
	I0920 19:25:24.050763  310233 main.go:141] libmachine: (newest-cni-398410) <domain type='kvm'>
	I0920 19:25:24.050779  310233 main.go:141] libmachine: (newest-cni-398410)   <name>newest-cni-398410</name>
	I0920 19:25:24.050792  310233 main.go:141] libmachine: (newest-cni-398410)   <memory unit='MiB'>2200</memory>
	I0920 19:25:24.050804  310233 main.go:141] libmachine: (newest-cni-398410)   <vcpu>2</vcpu>
	I0920 19:25:24.050808  310233 main.go:141] libmachine: (newest-cni-398410)   <features>
	I0920 19:25:24.050813  310233 main.go:141] libmachine: (newest-cni-398410)     <acpi/>
	I0920 19:25:24.050817  310233 main.go:141] libmachine: (newest-cni-398410)     <apic/>
	I0920 19:25:24.050822  310233 main.go:141] libmachine: (newest-cni-398410)     <pae/>
	I0920 19:25:24.050827  310233 main.go:141] libmachine: (newest-cni-398410)     
	I0920 19:25:24.050835  310233 main.go:141] libmachine: (newest-cni-398410)   </features>
	I0920 19:25:24.050839  310233 main.go:141] libmachine: (newest-cni-398410)   <cpu mode='host-passthrough'>
	I0920 19:25:24.050846  310233 main.go:141] libmachine: (newest-cni-398410)   
	I0920 19:25:24.050850  310233 main.go:141] libmachine: (newest-cni-398410)   </cpu>
	I0920 19:25:24.050859  310233 main.go:141] libmachine: (newest-cni-398410)   <os>
	I0920 19:25:24.050870  310233 main.go:141] libmachine: (newest-cni-398410)     <type>hvm</type>
	I0920 19:25:24.050882  310233 main.go:141] libmachine: (newest-cni-398410)     <boot dev='cdrom'/>
	I0920 19:25:24.050889  310233 main.go:141] libmachine: (newest-cni-398410)     <boot dev='hd'/>
	I0920 19:25:24.050901  310233 main.go:141] libmachine: (newest-cni-398410)     <bootmenu enable='no'/>
	I0920 19:25:24.050906  310233 main.go:141] libmachine: (newest-cni-398410)   </os>
	I0920 19:25:24.050911  310233 main.go:141] libmachine: (newest-cni-398410)   <devices>
	I0920 19:25:24.050921  310233 main.go:141] libmachine: (newest-cni-398410)     <disk type='file' device='cdrom'>
	I0920 19:25:24.050936  310233 main.go:141] libmachine: (newest-cni-398410)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410/boot2docker.iso'/>
	I0920 19:25:24.050952  310233 main.go:141] libmachine: (newest-cni-398410)       <target dev='hdc' bus='scsi'/>
	I0920 19:25:24.050963  310233 main.go:141] libmachine: (newest-cni-398410)       <readonly/>
	I0920 19:25:24.050969  310233 main.go:141] libmachine: (newest-cni-398410)     </disk>
	I0920 19:25:24.050980  310233 main.go:141] libmachine: (newest-cni-398410)     <disk type='file' device='disk'>
	I0920 19:25:24.050988  310233 main.go:141] libmachine: (newest-cni-398410)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 19:25:24.051003  310233 main.go:141] libmachine: (newest-cni-398410)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410/newest-cni-398410.rawdisk'/>
	I0920 19:25:24.051014  310233 main.go:141] libmachine: (newest-cni-398410)       <target dev='hda' bus='virtio'/>
	I0920 19:25:24.051039  310233 main.go:141] libmachine: (newest-cni-398410)     </disk>
	I0920 19:25:24.051060  310233 main.go:141] libmachine: (newest-cni-398410)     <interface type='network'>
	I0920 19:25:24.051072  310233 main.go:141] libmachine: (newest-cni-398410)       <source network='mk-newest-cni-398410'/>
	I0920 19:25:24.051082  310233 main.go:141] libmachine: (newest-cni-398410)       <model type='virtio'/>
	I0920 19:25:24.051091  310233 main.go:141] libmachine: (newest-cni-398410)     </interface>
	I0920 19:25:24.051102  310233 main.go:141] libmachine: (newest-cni-398410)     <interface type='network'>
	I0920 19:25:24.051113  310233 main.go:141] libmachine: (newest-cni-398410)       <source network='default'/>
	I0920 19:25:24.051127  310233 main.go:141] libmachine: (newest-cni-398410)       <model type='virtio'/>
	I0920 19:25:24.051153  310233 main.go:141] libmachine: (newest-cni-398410)     </interface>
	I0920 19:25:24.051171  310233 main.go:141] libmachine: (newest-cni-398410)     <serial type='pty'>
	I0920 19:25:24.051178  310233 main.go:141] libmachine: (newest-cni-398410)       <target port='0'/>
	I0920 19:25:24.051184  310233 main.go:141] libmachine: (newest-cni-398410)     </serial>
	I0920 19:25:24.051191  310233 main.go:141] libmachine: (newest-cni-398410)     <console type='pty'>
	I0920 19:25:24.051201  310233 main.go:141] libmachine: (newest-cni-398410)       <target type='serial' port='0'/>
	I0920 19:25:24.051214  310233 main.go:141] libmachine: (newest-cni-398410)     </console>
	I0920 19:25:24.051228  310233 main.go:141] libmachine: (newest-cni-398410)     <rng model='virtio'>
	I0920 19:25:24.051249  310233 main.go:141] libmachine: (newest-cni-398410)       <backend model='random'>/dev/random</backend>
	I0920 19:25:24.051261  310233 main.go:141] libmachine: (newest-cni-398410)     </rng>
	I0920 19:25:24.051272  310233 main.go:141] libmachine: (newest-cni-398410)     
	I0920 19:25:24.051285  310233 main.go:141] libmachine: (newest-cni-398410)     
	I0920 19:25:24.051295  310233 main.go:141] libmachine: (newest-cni-398410)   </devices>
	I0920 19:25:24.051304  310233 main.go:141] libmachine: (newest-cni-398410) </domain>
	I0920 19:25:24.051316  310233 main.go:141] libmachine: (newest-cni-398410) 
	I0920 19:25:24.056742  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:e4:05:7d in network default
	I0920 19:25:24.057305  310233 main.go:141] libmachine: (newest-cni-398410) Ensuring networks are active...
	I0920 19:25:24.057334  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:24.058166  310233 main.go:141] libmachine: (newest-cni-398410) Ensuring network default is active
	I0920 19:25:24.058433  310233 main.go:141] libmachine: (newest-cni-398410) Ensuring network mk-newest-cni-398410 is active
	I0920 19:25:24.059012  310233 main.go:141] libmachine: (newest-cni-398410) Getting domain xml...
	I0920 19:25:24.059921  310233 main.go:141] libmachine: (newest-cni-398410) Creating domain...
	I0920 19:25:25.358942  310233 main.go:141] libmachine: (newest-cni-398410) Waiting to get IP...
	I0920 19:25:25.361061  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:25.361575  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:25.361622  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:25.361538  310257 retry.go:31] will retry after 262.08471ms: waiting for machine to come up
	I0920 19:25:25.625209  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:25.625806  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:25.625834  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:25.625747  310257 retry.go:31] will retry after 389.923077ms: waiting for machine to come up
	I0920 19:25:26.017408  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:26.018045  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:26.018077  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:26.017993  310257 retry.go:31] will retry after 473.133715ms: waiting for machine to come up
	I0920 19:25:26.492258  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:26.492794  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:26.492815  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:26.492752  310257 retry.go:31] will retry after 524.383369ms: waiting for machine to come up
	I0920 19:25:27.018420  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:27.019712  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:27.019739  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:27.019645  310257 retry.go:31] will retry after 504.825618ms: waiting for machine to come up
	I0920 19:25:27.526456  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:27.526998  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:27.527023  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:27.526947  310257 retry.go:31] will retry after 918.804995ms: waiting for machine to come up
	I0920 19:25:28.447039  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:28.447667  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:28.447699  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:28.447618  310257 retry.go:31] will retry after 1.099101438s: waiting for machine to come up
	I0920 19:25:29.548392  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:29.548923  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:29.548954  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:29.548864  310257 retry.go:31] will retry after 1.050526325s: waiting for machine to come up
	I0920 19:25:30.600977  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:30.601540  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:30.601576  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:30.601488  310257 retry.go:31] will retry after 1.698668339s: waiting for machine to come up
	I0920 19:25:32.301339  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:32.301764  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:32.301796  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:32.301708  310257 retry.go:31] will retry after 2.040739428s: waiting for machine to come up
	I0920 19:25:34.344054  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:34.344544  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:34.344568  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:34.344515  310257 retry.go:31] will retry after 2.113534621s: waiting for machine to come up
	I0920 19:25:36.460218  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:36.460672  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:36.460699  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:36.460617  310257 retry.go:31] will retry after 2.700285138s: waiting for machine to come up
	I0920 19:25:39.163142  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:39.163728  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:39.163757  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:39.163673  310257 retry.go:31] will retry after 2.894153028s: waiting for machine to come up
	I0920 19:25:42.059269  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:42.059790  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:42.059822  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:42.059734  310257 retry.go:31] will retry after 4.180679222s: waiting for machine to come up
	I0920 19:25:46.294383  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:46.294950  310233 main.go:141] libmachine: (newest-cni-398410) Found IP for machine: 192.168.39.234
	I0920 19:25:46.294969  310233 main.go:141] libmachine: (newest-cni-398410) Reserving static IP address...
	I0920 19:25:46.294978  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has current primary IP address 192.168.39.234 and MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:46.295349  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find host DHCP lease matching {name: "newest-cni-398410", mac: "52:54:00:50:69:77", ip: "192.168.39.234"} in network mk-newest-cni-398410
	I0920 19:25:46.389879  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Getting to WaitForSSH function...
	I0920 19:25:46.389922  310233 main.go:141] libmachine: (newest-cni-398410) Reserved static IP address: 192.168.39.234
	I0920 19:25:46.389935  310233 main.go:141] libmachine: (newest-cni-398410) Waiting for SSH to be available...
	I0920 19:25:46.392742  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:46.393249  310233 main.go:141] libmachine: (newest-cni-398410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:69:77", ip: ""} in network mk-newest-cni-398410: {Iface:virbr1 ExpiryTime:2024-09-20 20:25:37 +0000 UTC Type:0 Mac:52:54:00:50:69:77 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:minikube Clientid:01:52:54:00:50:69:77}
	I0920 19:25:46.393287  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined IP address 192.168.39.234 and MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:46.393618  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Using SSH client type: external
	I0920 19:25:46.393644  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410/id_rsa (-rw-------)
	I0920 19:25:46.393674  310233 main.go:141] libmachine: (newest-cni-398410) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:25:46.393689  310233 main.go:141] libmachine: (newest-cni-398410) DBG | About to run SSH command:
	I0920 19:25:46.393737  310233 main.go:141] libmachine: (newest-cni-398410) DBG | exit 0
	I0920 19:25:46.518935  310233 main.go:141] libmachine: (newest-cni-398410) DBG | SSH cmd err, output: <nil>: 
	I0920 19:25:46.519212  310233 main.go:141] libmachine: (newest-cni-398410) KVM machine creation complete!
	I0920 19:25:46.519531  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetConfigRaw
	I0920 19:25:46.520245  310233 main.go:141] libmachine: (newest-cni-398410) Calling .DriverName
	I0920 19:25:46.520482  310233 main.go:141] libmachine: (newest-cni-398410) Calling .DriverName
	I0920 19:25:46.520667  310233 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 19:25:46.520679  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetState
	I0920 19:25:46.522641  310233 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 19:25:46.522681  310233 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 19:25:46.522690  310233 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 19:25:46.522700  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHHostname
	I0920 19:25:46.525272  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:46.525836  310233 main.go:141] libmachine: (newest-cni-398410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:69:77", ip: ""} in network mk-newest-cni-398410: {Iface:virbr1 ExpiryTime:2024-09-20 20:25:37 +0000 UTC Type:0 Mac:52:54:00:50:69:77 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:newest-cni-398410 Clientid:01:52:54:00:50:69:77}
	I0920 19:25:46.525879  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined IP address 192.168.39.234 and MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:46.526020  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHPort
	I0920 19:25:46.526256  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHKeyPath
	I0920 19:25:46.526494  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHKeyPath
	I0920 19:25:46.526660  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHUsername
	I0920 19:25:46.526865  310233 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:46.527138  310233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0920 19:25:46.527154  310233 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 19:25:46.637577  310233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:25:46.637606  310233 main.go:141] libmachine: Detecting the provisioner...
	I0920 19:25:46.637616  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHHostname
	I0920 19:25:46.640337  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:46.640798  310233 main.go:141] libmachine: (newest-cni-398410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:69:77", ip: ""} in network mk-newest-cni-398410: {Iface:virbr1 ExpiryTime:2024-09-20 20:25:37 +0000 UTC Type:0 Mac:52:54:00:50:69:77 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:newest-cni-398410 Clientid:01:52:54:00:50:69:77}
	I0920 19:25:46.640827  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined IP address 192.168.39.234 and MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:46.641054  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHPort
	I0920 19:25:46.641289  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHKeyPath
	I0920 19:25:46.641459  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHKeyPath
	I0920 19:25:46.641608  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHUsername
	I0920 19:25:46.641796  310233 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:46.642000  310233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0920 19:25:46.642011  310233 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 19:25:46.746605  310233 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 19:25:46.746698  310233 main.go:141] libmachine: found compatible host: buildroot
	I0920 19:25:46.746708  310233 main.go:141] libmachine: Provisioning with buildroot...
	I0920 19:25:46.746716  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetMachineName
	I0920 19:25:46.746995  310233 buildroot.go:166] provisioning hostname "newest-cni-398410"
	I0920 19:25:46.747026  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetMachineName
	I0920 19:25:46.747218  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHHostname
	I0920 19:25:46.750120  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:46.750527  310233 main.go:141] libmachine: (newest-cni-398410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:69:77", ip: ""} in network mk-newest-cni-398410: {Iface:virbr1 ExpiryTime:2024-09-20 20:25:37 +0000 UTC Type:0 Mac:52:54:00:50:69:77 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:newest-cni-398410 Clientid:01:52:54:00:50:69:77}
	I0920 19:25:46.750557  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined IP address 192.168.39.234 and MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:46.750815  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHPort
	I0920 19:25:46.751012  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHKeyPath
	I0920 19:25:46.751204  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHKeyPath
	I0920 19:25:46.751338  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHUsername
	I0920 19:25:46.751509  310233 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:46.751743  310233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0920 19:25:46.751762  310233 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-398410 && echo "newest-cni-398410" | sudo tee /etc/hostname
	I0920 19:25:46.873028  310233 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-398410
	
	I0920 19:25:46.873063  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHHostname
	I0920 19:25:46.875847  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:46.876241  310233 main.go:141] libmachine: (newest-cni-398410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:69:77", ip: ""} in network mk-newest-cni-398410: {Iface:virbr1 ExpiryTime:2024-09-20 20:25:37 +0000 UTC Type:0 Mac:52:54:00:50:69:77 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:newest-cni-398410 Clientid:01:52:54:00:50:69:77}
	I0920 19:25:46.876268  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined IP address 192.168.39.234 and MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:46.876488  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHPort
	I0920 19:25:46.876679  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHKeyPath
	I0920 19:25:46.876867  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHKeyPath
	I0920 19:25:46.876993  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHUsername
	I0920 19:25:46.877156  310233 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:46.877332  310233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0920 19:25:46.877347  310233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-398410' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-398410/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-398410' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:25:46.994190  310233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:25:46.994226  310233 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:25:46.994256  310233 buildroot.go:174] setting up certificates
	I0920 19:25:46.994268  310233 provision.go:84] configureAuth start
	I0920 19:25:46.994282  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetMachineName
	I0920 19:25:46.994612  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetIP
	I0920 19:25:46.997285  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:46.997728  310233 main.go:141] libmachine: (newest-cni-398410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:69:77", ip: ""} in network mk-newest-cni-398410: {Iface:virbr1 ExpiryTime:2024-09-20 20:25:37 +0000 UTC Type:0 Mac:52:54:00:50:69:77 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:newest-cni-398410 Clientid:01:52:54:00:50:69:77}
	I0920 19:25:46.997759  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined IP address 192.168.39.234 and MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:46.997969  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHHostname
	I0920 19:25:47.000454  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:47.000803  310233 main.go:141] libmachine: (newest-cni-398410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:69:77", ip: ""} in network mk-newest-cni-398410: {Iface:virbr1 ExpiryTime:2024-09-20 20:25:37 +0000 UTC Type:0 Mac:52:54:00:50:69:77 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:newest-cni-398410 Clientid:01:52:54:00:50:69:77}
	I0920 19:25:47.000827  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined IP address 192.168.39.234 and MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:47.001003  310233 provision.go:143] copyHostCerts
	I0920 19:25:47.001065  310233 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:25:47.001085  310233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:25:47.001155  310233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:25:47.001262  310233 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:25:47.001272  310233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:25:47.001298  310233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:25:47.001351  310233 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:25:47.001358  310233 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:25:47.001378  310233 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:25:47.001426  310233 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.newest-cni-398410 san=[127.0.0.1 192.168.39.234 localhost minikube newest-cni-398410]
	I0920 19:25:47.304484  310233 provision.go:177] copyRemoteCerts
	I0920 19:25:47.304557  310233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:25:47.304592  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHHostname
	I0920 19:25:47.307713  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:47.308087  310233 main.go:141] libmachine: (newest-cni-398410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:69:77", ip: ""} in network mk-newest-cni-398410: {Iface:virbr1 ExpiryTime:2024-09-20 20:25:37 +0000 UTC Type:0 Mac:52:54:00:50:69:77 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:newest-cni-398410 Clientid:01:52:54:00:50:69:77}
	I0920 19:25:47.308120  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined IP address 192.168.39.234 and MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:47.308293  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHPort
	I0920 19:25:47.308535  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHKeyPath
	I0920 19:25:47.308749  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHUsername
	I0920 19:25:47.308895  310233 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410/id_rsa Username:docker}
	I0920 19:25:47.392132  310233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 19:25:47.417291  310233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:25:47.441287  310233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 19:25:47.466402  310233 provision.go:87] duration metric: took 472.115724ms to configureAuth
	I0920 19:25:47.466441  310233 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:25:47.466654  310233 config.go:182] Loaded profile config "newest-cni-398410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:25:47.466748  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHHostname
	I0920 19:25:47.469930  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:47.470369  310233 main.go:141] libmachine: (newest-cni-398410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:69:77", ip: ""} in network mk-newest-cni-398410: {Iface:virbr1 ExpiryTime:2024-09-20 20:25:37 +0000 UTC Type:0 Mac:52:54:00:50:69:77 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:newest-cni-398410 Clientid:01:52:54:00:50:69:77}
	I0920 19:25:47.470404  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined IP address 192.168.39.234 and MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:47.470625  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHPort
	I0920 19:25:47.470849  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHKeyPath
	I0920 19:25:47.471046  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHKeyPath
	I0920 19:25:47.471198  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHUsername
	I0920 19:25:47.471351  310233 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:47.471567  310233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0920 19:25:47.471590  310233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:25:47.686971  310233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:25:47.687005  310233 main.go:141] libmachine: Checking connection to Docker...
	I0920 19:25:47.687016  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetURL
	I0920 19:25:47.688486  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Using libvirt version 6000000
	I0920 19:25:47.691049  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:47.691424  310233 main.go:141] libmachine: (newest-cni-398410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:69:77", ip: ""} in network mk-newest-cni-398410: {Iface:virbr1 ExpiryTime:2024-09-20 20:25:37 +0000 UTC Type:0 Mac:52:54:00:50:69:77 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:newest-cni-398410 Clientid:01:52:54:00:50:69:77}
	I0920 19:25:47.691456  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined IP address 192.168.39.234 and MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:47.691646  310233 main.go:141] libmachine: Docker is up and running!
	I0920 19:25:47.691664  310233 main.go:141] libmachine: Reticulating splines...
	I0920 19:25:47.691674  310233 client.go:171] duration metric: took 24.151849244s to LocalClient.Create
	I0920 19:25:47.691697  310233 start.go:167] duration metric: took 24.151912174s to libmachine.API.Create "newest-cni-398410"
	I0920 19:25:47.691705  310233 start.go:293] postStartSetup for "newest-cni-398410" (driver="kvm2")
	I0920 19:25:47.691715  310233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:25:47.691734  310233 main.go:141] libmachine: (newest-cni-398410) Calling .DriverName
	I0920 19:25:47.692006  310233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:25:47.692031  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHHostname
	I0920 19:25:47.694186  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:47.694515  310233 main.go:141] libmachine: (newest-cni-398410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:69:77", ip: ""} in network mk-newest-cni-398410: {Iface:virbr1 ExpiryTime:2024-09-20 20:25:37 +0000 UTC Type:0 Mac:52:54:00:50:69:77 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:newest-cni-398410 Clientid:01:52:54:00:50:69:77}
	I0920 19:25:47.694542  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined IP address 192.168.39.234 and MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:47.694678  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHPort
	I0920 19:25:47.694904  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHKeyPath
	I0920 19:25:47.695074  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHUsername
	I0920 19:25:47.695212  310233 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410/id_rsa Username:docker}
	I0920 19:25:47.784602  310233 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:25:47.788877  310233 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:25:47.788911  310233 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:25:47.788985  310233 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:25:47.789060  310233 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:25:47.789157  310233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:25:47.799011  310233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:25:47.824713  310233 start.go:296] duration metric: took 132.989486ms for postStartSetup
	I0920 19:25:47.824784  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetConfigRaw
	I0920 19:25:47.825551  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetIP
	I0920 19:25:47.828516  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:47.828890  310233 main.go:141] libmachine: (newest-cni-398410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:69:77", ip: ""} in network mk-newest-cni-398410: {Iface:virbr1 ExpiryTime:2024-09-20 20:25:37 +0000 UTC Type:0 Mac:52:54:00:50:69:77 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:newest-cni-398410 Clientid:01:52:54:00:50:69:77}
	I0920 19:25:47.828922  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined IP address 192.168.39.234 and MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:47.829197  310233 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/config.json ...
	I0920 19:25:47.829427  310233 start.go:128] duration metric: took 24.311225527s to createHost
	I0920 19:25:47.829455  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHHostname
	I0920 19:25:47.832037  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:47.832365  310233 main.go:141] libmachine: (newest-cni-398410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:69:77", ip: ""} in network mk-newest-cni-398410: {Iface:virbr1 ExpiryTime:2024-09-20 20:25:37 +0000 UTC Type:0 Mac:52:54:00:50:69:77 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:newest-cni-398410 Clientid:01:52:54:00:50:69:77}
	I0920 19:25:47.832393  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined IP address 192.168.39.234 and MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:47.832535  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHPort
	I0920 19:25:47.832728  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHKeyPath
	I0920 19:25:47.832895  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHKeyPath
	I0920 19:25:47.833103  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHUsername
	I0920 19:25:47.833317  310233 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:47.833530  310233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0920 19:25:47.833551  310233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:25:47.942673  310233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726860347.919948317
	
	I0920 19:25:47.942699  310233 fix.go:216] guest clock: 1726860347.919948317
	I0920 19:25:47.942707  310233 fix.go:229] Guest: 2024-09-20 19:25:47.919948317 +0000 UTC Remote: 2024-09-20 19:25:47.829440846 +0000 UTC m=+24.432431091 (delta=90.507471ms)
	I0920 19:25:47.942734  310233 fix.go:200] guest clock delta is within tolerance: 90.507471ms
	I0920 19:25:47.942741  310233 start.go:83] releasing machines lock for "newest-cni-398410", held for 24.42467721s
	I0920 19:25:47.942761  310233 main.go:141] libmachine: (newest-cni-398410) Calling .DriverName
	I0920 19:25:47.943046  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetIP
	I0920 19:25:47.945857  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:47.946308  310233 main.go:141] libmachine: (newest-cni-398410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:69:77", ip: ""} in network mk-newest-cni-398410: {Iface:virbr1 ExpiryTime:2024-09-20 20:25:37 +0000 UTC Type:0 Mac:52:54:00:50:69:77 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:newest-cni-398410 Clientid:01:52:54:00:50:69:77}
	I0920 19:25:47.946340  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined IP address 192.168.39.234 and MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:47.946476  310233 main.go:141] libmachine: (newest-cni-398410) Calling .DriverName
	I0920 19:25:47.947024  310233 main.go:141] libmachine: (newest-cni-398410) Calling .DriverName
	I0920 19:25:47.947197  310233 main.go:141] libmachine: (newest-cni-398410) Calling .DriverName
	I0920 19:25:47.947285  310233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:25:47.947342  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHHostname
	I0920 19:25:47.947426  310233 ssh_runner.go:195] Run: cat /version.json
	I0920 19:25:47.947443  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHHostname
	I0920 19:25:47.950191  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:47.950266  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:47.950598  310233 main.go:141] libmachine: (newest-cni-398410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:69:77", ip: ""} in network mk-newest-cni-398410: {Iface:virbr1 ExpiryTime:2024-09-20 20:25:37 +0000 UTC Type:0 Mac:52:54:00:50:69:77 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:newest-cni-398410 Clientid:01:52:54:00:50:69:77}
	I0920 19:25:47.950637  310233 main.go:141] libmachine: (newest-cni-398410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:69:77", ip: ""} in network mk-newest-cni-398410: {Iface:virbr1 ExpiryTime:2024-09-20 20:25:37 +0000 UTC Type:0 Mac:52:54:00:50:69:77 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:newest-cni-398410 Clientid:01:52:54:00:50:69:77}
	I0920 19:25:47.950672  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined IP address 192.168.39.234 and MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:47.950688  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined IP address 192.168.39.234 and MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:47.950791  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHPort
	I0920 19:25:47.950939  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHPort
	I0920 19:25:47.951059  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHKeyPath
	I0920 19:25:47.951080  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHKeyPath
	I0920 19:25:47.951224  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHUsername
	I0920 19:25:47.951286  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHUsername
	I0920 19:25:47.951368  310233 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410/id_rsa Username:docker}
	I0920 19:25:47.951514  310233 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410/id_rsa Username:docker}
	I0920 19:25:48.060639  310233 ssh_runner.go:195] Run: systemctl --version
	I0920 19:25:48.066403  310233 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:25:48.231718  310233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:25:48.238022  310233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:25:48.238092  310233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:25:48.253581  310233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:25:48.253606  310233 start.go:495] detecting cgroup driver to use...
	I0920 19:25:48.253666  310233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:25:48.270307  310233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:25:48.284237  310233 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:25:48.284315  310233 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:25:48.299007  310233 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:25:48.314630  310233 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:25:48.441781  310233 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:25:48.581292  310233 docker.go:233] disabling docker service ...
	I0920 19:25:48.581355  310233 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:25:48.596486  310233 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:25:48.609223  310233 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:25:48.740064  310233 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:25:48.862555  310233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:25:48.876376  310233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:25:48.895262  310233 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:25:48.895338  310233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:48.906155  310233 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:25:48.906233  310233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:48.917945  310233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:48.928756  310233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:48.939428  310233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:25:48.950070  310233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:48.960444  310233 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:48.977525  310233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:48.987855  310233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:25:48.997412  310233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:25:48.997475  310233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:25:49.012150  310233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:25:49.021885  310233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:25:49.130746  310233 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:25:49.223580  310233 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:25:49.223660  310233 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:25:49.228196  310233 start.go:563] Will wait 60s for crictl version
	I0920 19:25:49.228266  310233 ssh_runner.go:195] Run: which crictl
	I0920 19:25:49.231947  310233 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:25:49.273395  310233 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:25:49.273480  310233 ssh_runner.go:195] Run: crio --version
	I0920 19:25:49.301227  310233 ssh_runner.go:195] Run: crio --version
	I0920 19:25:49.331276  310233 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 19:25:49.332694  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetIP
	I0920 19:25:49.335677  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:49.336071  310233 main.go:141] libmachine: (newest-cni-398410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:69:77", ip: ""} in network mk-newest-cni-398410: {Iface:virbr1 ExpiryTime:2024-09-20 20:25:37 +0000 UTC Type:0 Mac:52:54:00:50:69:77 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:newest-cni-398410 Clientid:01:52:54:00:50:69:77}
	I0920 19:25:49.336102  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined IP address 192.168.39.234 and MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:49.336377  310233 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 19:25:49.340370  310233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:25:49.354721  310233 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0920 19:25:49.355862  310233 kubeadm.go:883] updating cluster {Name:newest-cni-398410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:newest-cni-398410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:25:49.355982  310233 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:25:49.356045  310233 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:25:49.390430  310233 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 19:25:49.390512  310233 ssh_runner.go:195] Run: which lz4
	I0920 19:25:49.394434  310233 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:25:49.398716  310233 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:25:49.398752  310233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 19:25:50.630640  310233 crio.go:462] duration metric: took 1.236234833s to copy over tarball
	I0920 19:25:50.630744  310233 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 19:25:52.642982  310233 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.012203975s)
	I0920 19:25:52.643011  310233 crio.go:469] duration metric: took 2.012333798s to extract the tarball
	I0920 19:25:52.643018  310233 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 19:25:52.679772  310233 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:25:52.723848  310233 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:25:52.723870  310233 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:25:52.723881  310233 kubeadm.go:934] updating node { 192.168.39.234 8443 v1.31.1 crio true true} ...
	I0920 19:25:52.724044  310233 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-398410 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-398410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:25:52.724137  310233 ssh_runner.go:195] Run: crio config
	I0920 19:25:52.768920  310233 cni.go:84] Creating CNI manager for ""
	I0920 19:25:52.768942  310233 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:25:52.768953  310233 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0920 19:25:52.768981  310233 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.234 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-398410 NodeName:newest-cni-398410 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.39.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:25:52.769108  310233 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-398410"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.234
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.234"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:25:52.769169  310233 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:25:52.782855  310233 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:25:52.782935  310233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:25:52.792580  310233 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0920 19:25:52.809527  310233 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:25:52.827073  310233 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0920 19:25:52.843612  310233 ssh_runner.go:195] Run: grep 192.168.39.234	control-plane.minikube.internal$ /etc/hosts
	I0920 19:25:52.847701  310233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.234	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:25:52.859980  310233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:25:52.982913  310233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:25:52.999720  310233 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410 for IP: 192.168.39.234
	I0920 19:25:52.999748  310233 certs.go:194] generating shared ca certs ...
	I0920 19:25:52.999774  310233 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:25:52.999981  310233 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:25:53.000040  310233 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:25:53.000053  310233 certs.go:256] generating profile certs ...
	I0920 19:25:53.000126  310233 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/client.key
	I0920 19:25:53.000146  310233 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/client.crt with IP's: []
	I0920 19:25:53.174437  310233 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/client.crt ...
	I0920 19:25:53.174468  310233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/client.crt: {Name:mk5359b6c4ebd34a42d3f090cd4dce0a1da3d7d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:25:53.174657  310233 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/client.key ...
	I0920 19:25:53.174669  310233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/client.key: {Name:mkad72380f8819329b29a6ace18f9967c0d3db84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:25:53.174766  310233 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/apiserver.key.65d767be
	I0920 19:25:53.174789  310233 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/apiserver.crt.65d767be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.234]
	I0920 19:25:53.254140  310233 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/apiserver.crt.65d767be ...
	I0920 19:25:53.254202  310233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/apiserver.crt.65d767be: {Name:mkaf26cd8714ba78fe0d3d57f040bd6049135a6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:25:53.254376  310233 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/apiserver.key.65d767be ...
	I0920 19:25:53.254393  310233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/apiserver.key.65d767be: {Name:mk83aa8b4c7fd6090a55679ced38c0cc54ad36ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:25:53.254461  310233 certs.go:381] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/apiserver.crt.65d767be -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/apiserver.crt
	I0920 19:25:53.254541  310233 certs.go:385] copying /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/apiserver.key.65d767be -> /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/apiserver.key
	I0920 19:25:53.254592  310233 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/proxy-client.key
	I0920 19:25:53.254611  310233 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/proxy-client.crt with IP's: []
	I0920 19:25:53.500905  310233 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/proxy-client.crt ...
	I0920 19:25:53.500941  310233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/proxy-client.crt: {Name:mkadb3d74160878d53ef49437483060c02498a19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:25:53.501127  310233 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/proxy-client.key ...
	I0920 19:25:53.501141  310233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/proxy-client.key: {Name:mk41a48fb68aa44017588188ba6529efef4149a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:25:53.501319  310233 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:25:53.501358  310233 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:25:53.501370  310233 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:25:53.501394  310233 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:25:53.501417  310233 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:25:53.501438  310233 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:25:53.501474  310233 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:25:53.502074  310233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:25:53.527721  310233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:25:53.553006  310233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:25:53.576885  310233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:25:53.603940  310233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 19:25:53.630638  310233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 19:25:53.656004  310233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:25:53.681594  310233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 19:25:53.708640  310233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:25:53.734575  310233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:25:53.758748  310233 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:25:53.790089  310233 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:25:53.809774  310233 ssh_runner.go:195] Run: openssl version
	I0920 19:25:53.815738  310233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:25:53.828531  310233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:25:53.833943  310233 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:25:53.834010  310233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:25:53.841118  310233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:25:53.857932  310233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:25:53.869346  310233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:25:53.873735  310233 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:25:53.873791  310233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:25:53.879406  310233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:25:53.889709  310233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:25:53.899790  310233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:25:53.903923  310233 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:25:53.903999  310233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:25:53.909550  310233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:25:53.919743  310233 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:25:53.923982  310233 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 19:25:53.924053  310233 kubeadm.go:392] StartCluster: {Name:newest-cni-398410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:newest-cni-398410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:25:53.924153  310233 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:25:53.924199  310233 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:25:53.967124  310233 cri.go:89] found id: ""
	I0920 19:25:53.967216  310233 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:25:53.977032  310233 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:25:53.986340  310233 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:25:53.996311  310233 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:25:53.996331  310233 kubeadm.go:157] found existing configuration files:
	
	I0920 19:25:53.996377  310233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:25:54.006226  310233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:25:54.006281  310233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:25:54.016169  310233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:25:54.026136  310233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:25:54.026197  310233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:25:54.036088  310233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:25:54.045388  310233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:25:54.045481  310233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:25:54.056188  310233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:25:54.064916  310233 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:25:54.064986  310233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:25:54.073733  310233 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:25:54.175272  310233 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 19:25:54.175407  310233 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:25:54.277705  310233 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:25:54.277831  310233 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:25:54.277961  310233 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 19:25:54.290666  310233 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:25:54.293370  310233 out.go:235]   - Generating certificates and keys ...
	I0920 19:25:54.293490  310233 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:25:54.293571  310233 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:25:54.421129  310233 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 19:25:54.505243  310233 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 19:25:54.799440  310233 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 19:25:54.943134  310233 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 19:25:55.075234  310233 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 19:25:55.075415  310233 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-398410] and IPs [192.168.39.234 127.0.0.1 ::1]
	I0920 19:25:55.221217  310233 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 19:25:55.221416  310233 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-398410] and IPs [192.168.39.234 127.0.0.1 ::1]
	I0920 19:25:55.414072  310233 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 19:25:55.695277  310233 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 19:25:55.904204  310233 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 19:25:55.904310  310233 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:25:55.967669  310233 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:25:56.293171  310233 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 19:25:56.405190  310233 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:25:56.530756  310233 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:25:56.714443  310233 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:25:56.715148  310233 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:25:56.718528  310233 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:25:56.720945  310233 out.go:235]   - Booting up control plane ...
	I0920 19:25:56.721051  310233 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:25:56.721161  310233 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:25:56.721765  310233 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:25:56.738503  310233 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:25:56.745386  310233 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:25:56.745449  310233 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:25:56.883983  310233 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 19:25:56.884176  310233 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 19:25:57.384408  310233 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.000291ms
	I0920 19:25:57.384492  310233 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 19:26:02.383323  310233 kubeadm.go:310] [api-check] The API server is healthy after 5.001697285s
	I0920 19:26:02.400212  310233 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 19:26:02.421664  310233 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 19:26:02.475260  310233 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 19:26:02.475510  310233 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-398410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 19:26:02.495581  310233 kubeadm.go:310] [bootstrap-token] Using token: 7zj6rd.bfcwtauwfvyyf4it
	I0920 19:26:02.497405  310233 out.go:235]   - Configuring RBAC rules ...
	I0920 19:26:02.497562  310233 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 19:26:02.506218  310233 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 19:26:02.530616  310233 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 19:26:02.542028  310233 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 19:26:02.547930  310233 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 19:26:02.552284  310233 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 19:26:02.790626  310233 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 19:26:03.237255  310233 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 19:26:03.791274  310233 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 19:26:03.791307  310233 kubeadm.go:310] 
	I0920 19:26:03.791378  310233 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 19:26:03.791387  310233 kubeadm.go:310] 
	I0920 19:26:03.791481  310233 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 19:26:03.791493  310233 kubeadm.go:310] 
	I0920 19:26:03.791528  310233 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 19:26:03.791608  310233 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 19:26:03.791698  310233 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 19:26:03.791709  310233 kubeadm.go:310] 
	I0920 19:26:03.791765  310233 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 19:26:03.791772  310233 kubeadm.go:310] 
	I0920 19:26:03.791811  310233 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 19:26:03.791817  310233 kubeadm.go:310] 
	I0920 19:26:03.791890  310233 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 19:26:03.791981  310233 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 19:26:03.792061  310233 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 19:26:03.792074  310233 kubeadm.go:310] 
	I0920 19:26:03.792217  310233 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 19:26:03.792354  310233 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 19:26:03.792367  310233 kubeadm.go:310] 
	I0920 19:26:03.792480  310233 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7zj6rd.bfcwtauwfvyyf4it \
	I0920 19:26:03.792629  310233 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 \
	I0920 19:26:03.792668  310233 kubeadm.go:310] 	--control-plane 
	I0920 19:26:03.792679  310233 kubeadm.go:310] 
	I0920 19:26:03.792808  310233 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 19:26:03.792830  310233 kubeadm.go:310] 
	I0920 19:26:03.792952  310233 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7zj6rd.bfcwtauwfvyyf4it \
	I0920 19:26:03.793102  310233 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 
	I0920 19:26:03.793516  310233 kubeadm.go:310] W0920 19:25:54.155931     827 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:26:03.793931  310233 kubeadm.go:310] W0920 19:25:54.156953     827 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:26:03.794064  310233 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:26:03.794099  310233 cni.go:84] Creating CNI manager for ""
	I0920 19:26:03.794113  310233 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:26:03.796483  310233 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:26:03.798371  310233 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:26:03.811029  310233 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:26:03.832543  310233 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:26:03.832637  310233 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:03.832737  310233 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-398410 minikube.k8s.io/updated_at=2024_09_20T19_26_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=newest-cni-398410 minikube.k8s.io/primary=true
	I0920 19:26:04.043156  310233 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:04.043198  310233 ops.go:34] apiserver oom_adj: -16
	I0920 19:26:04.544036  310233 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:05.043741  310233 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:05.543434  310233 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:06.044218  310233 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:06.544161  310233 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:07.043352  310233 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:07.543730  310233 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:08.043838  310233 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:08.157001  310233 kubeadm.go:1113] duration metric: took 4.324429218s to wait for elevateKubeSystemPrivileges
	I0920 19:26:08.157045  310233 kubeadm.go:394] duration metric: took 14.232998451s to StartCluster
	I0920 19:26:08.157079  310233 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:08.157194  310233 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:26:08.159590  310233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:08.159897  310233 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:26:08.159948  310233 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 19:26:08.159999  310233 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:26:08.160093  310233 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-398410"
	I0920 19:26:08.160128  310233 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-398410"
	I0920 19:26:08.160102  310233 addons.go:69] Setting default-storageclass=true in profile "newest-cni-398410"
	I0920 19:26:08.160178  310233 host.go:66] Checking if "newest-cni-398410" exists ...
	I0920 19:26:08.160198  310233 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-398410"
	I0920 19:26:08.160207  310233 config.go:182] Loaded profile config "newest-cni-398410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:26:08.160663  310233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:26:08.160682  310233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:26:08.160709  310233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:26:08.160719  310233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:26:08.161491  310233 out.go:177] * Verifying Kubernetes components...
	I0920 19:26:08.163080  310233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:26:08.178035  310233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37139
	I0920 19:26:08.178123  310233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43139
	I0920 19:26:08.178631  310233 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:26:08.178647  310233 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:26:08.179189  310233 main.go:141] libmachine: Using API Version  1
	I0920 19:26:08.179293  310233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:26:08.179342  310233 main.go:141] libmachine: Using API Version  1
	I0920 19:26:08.179371  310233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:26:08.179686  310233 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:26:08.179776  310233 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:26:08.180258  310233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:26:08.180312  310233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:26:08.180700  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetState
	I0920 19:26:08.185364  310233 addons.go:234] Setting addon default-storageclass=true in "newest-cni-398410"
	I0920 19:26:08.185414  310233 host.go:66] Checking if "newest-cni-398410" exists ...
	I0920 19:26:08.185885  310233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:26:08.185968  310233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:26:08.197623  310233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33193
	I0920 19:26:08.198265  310233 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:26:08.198882  310233 main.go:141] libmachine: Using API Version  1
	I0920 19:26:08.198908  310233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:26:08.199252  310233 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:26:08.199449  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetState
	I0920 19:26:08.201522  310233 main.go:141] libmachine: (newest-cni-398410) Calling .DriverName
	I0920 19:26:08.202163  310233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35367
	I0920 19:26:08.202653  310233 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:26:08.203218  310233 main.go:141] libmachine: Using API Version  1
	I0920 19:26:08.203246  310233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:26:08.203472  310233 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:26:08.203617  310233 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:26:08.204127  310233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:26:08.204173  310233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:26:08.205495  310233 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:26:08.205550  310233 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:26:08.205588  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHHostname
	I0920 19:26:08.209172  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:26:08.209661  310233 main.go:141] libmachine: (newest-cni-398410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:69:77", ip: ""} in network mk-newest-cni-398410: {Iface:virbr1 ExpiryTime:2024-09-20 20:25:37 +0000 UTC Type:0 Mac:52:54:00:50:69:77 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:newest-cni-398410 Clientid:01:52:54:00:50:69:77}
	I0920 19:26:08.209827  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined IP address 192.168.39.234 and MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:26:08.209917  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHPort
	I0920 19:26:08.210296  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHKeyPath
	I0920 19:26:08.210728  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHUsername
	I0920 19:26:08.210999  310233 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410/id_rsa Username:docker}
	I0920 19:26:08.221947  310233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I0920 19:26:08.222539  310233 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:26:08.223084  310233 main.go:141] libmachine: Using API Version  1
	I0920 19:26:08.223108  310233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:26:08.223525  310233 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:26:08.223741  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetState
	I0920 19:26:08.225532  310233 main.go:141] libmachine: (newest-cni-398410) Calling .DriverName
	I0920 19:26:08.225782  310233 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:26:08.225799  310233 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:26:08.225821  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHHostname
	I0920 19:26:08.228638  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:26:08.229063  310233 main.go:141] libmachine: (newest-cni-398410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:69:77", ip: ""} in network mk-newest-cni-398410: {Iface:virbr1 ExpiryTime:2024-09-20 20:25:37 +0000 UTC Type:0 Mac:52:54:00:50:69:77 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:newest-cni-398410 Clientid:01:52:54:00:50:69:77}
	I0920 19:26:08.229087  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined IP address 192.168.39.234 and MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:26:08.229218  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHPort
	I0920 19:26:08.229379  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHKeyPath
	I0920 19:26:08.229536  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetSSHUsername
	I0920 19:26:08.229658  310233 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410/id_rsa Username:docker}
	I0920 19:26:08.446714  310233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:26:08.446746  310233 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 19:26:08.502227  310233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:26:08.514402  310233 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:26:08.514480  310233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:26:08.626885  310233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:26:09.146210  310233 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0920 19:26:09.146329  310233 main.go:141] libmachine: Making call to close driver server
	I0920 19:26:09.146356  310233 main.go:141] libmachine: (newest-cni-398410) Calling .Close
	I0920 19:26:09.146349  310233 api_server.go:72] duration metric: took 986.413081ms to wait for apiserver process to appear ...
	I0920 19:26:09.146412  310233 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:26:09.146453  310233 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0920 19:26:09.146688  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Closing plugin on server side
	I0920 19:26:09.146691  310233 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:26:09.146712  310233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:26:09.146720  310233 main.go:141] libmachine: Making call to close driver server
	I0920 19:26:09.146728  310233 main.go:141] libmachine: (newest-cni-398410) Calling .Close
	I0920 19:26:09.146959  310233 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:26:09.146987  310233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:26:09.146991  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Closing plugin on server side
	I0920 19:26:09.163912  310233 api_server.go:279] https://192.168.39.234:8443/healthz returned 200:
	ok
	I0920 19:26:09.164897  310233 api_server.go:141] control plane version: v1.31.1
	I0920 19:26:09.164923  310233 api_server.go:131] duration metric: took 18.49678ms to wait for apiserver health ...
	I0920 19:26:09.164932  310233 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:26:09.192427  310233 main.go:141] libmachine: Making call to close driver server
	I0920 19:26:09.192453  310233 main.go:141] libmachine: (newest-cni-398410) Calling .Close
	I0920 19:26:09.192793  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Closing plugin on server side
	I0920 19:26:09.192840  310233 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:26:09.192857  310233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:26:09.196104  310233 system_pods.go:59] 7 kube-system pods found
	I0920 19:26:09.196138  310233 system_pods.go:61] "coredns-7c65d6cfc9-52rk5" [34bf5896-760f-4a8c-bf08-2b3743069d8c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:26:09.196146  310233 system_pods.go:61] "coredns-7c65d6cfc9-h7wt4" [8122f920-c43a-46e0-9ec4-bbc015dbb101] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:26:09.196158  310233 system_pods.go:61] "etcd-newest-cni-398410" [6360efaf-fff2-434d-9c2d-3bb3fd867cb3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 19:26:09.196166  310233 system_pods.go:61] "kube-apiserver-newest-cni-398410" [a73ca480-da66-489d-bb30-12eb682e1dcc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 19:26:09.196173  310233 system_pods.go:61] "kube-controller-manager-newest-cni-398410" [3b017875-b32d-4524-96df-b742ae85f09e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:26:09.196181  310233 system_pods.go:61] "kube-proxy-nkgst" [44ecff86-fbba-48f4-abdc-b2ae13dbdc7a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 19:26:09.196187  310233 system_pods.go:61] "kube-scheduler-newest-cni-398410" [83d36c7a-572e-42e5-ba01-9c2390e91b3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 19:26:09.196196  310233 system_pods.go:74] duration metric: took 31.257479ms to wait for pod list to return data ...
	I0920 19:26:09.196206  310233 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:26:09.206666  310233 default_sa.go:45] found service account: "default"
	I0920 19:26:09.206693  310233 default_sa.go:55] duration metric: took 10.479263ms for default service account to be created ...
	I0920 19:26:09.206705  310233 kubeadm.go:582] duration metric: took 1.046774311s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0920 19:26:09.206720  310233 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:26:09.226399  310233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:26:09.226429  310233 node_conditions.go:123] node cpu capacity is 2
	I0920 19:26:09.226449  310233 node_conditions.go:105] duration metric: took 19.725438ms to run NodePressure ...
	I0920 19:26:09.226462  310233 start.go:241] waiting for startup goroutines ...
	I0920 19:26:09.655727  310233 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-398410" context rescaled to 1 replicas
	I0920 19:26:09.665770  310233 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.03884204s)
	I0920 19:26:09.665851  310233 main.go:141] libmachine: Making call to close driver server
	I0920 19:26:09.665866  310233 main.go:141] libmachine: (newest-cni-398410) Calling .Close
	I0920 19:26:09.666216  310233 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:26:09.666259  310233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:26:09.666274  310233 main.go:141] libmachine: Making call to close driver server
	I0920 19:26:09.666284  310233 main.go:141] libmachine: (newest-cni-398410) Calling .Close
	I0920 19:26:09.668016  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Closing plugin on server side
	I0920 19:26:09.668023  310233 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:26:09.668060  310233 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:26:09.670136  310233 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0920 19:26:09.671428  310233 addons.go:510] duration metric: took 1.511430953s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0920 19:26:09.671484  310233 start.go:246] waiting for cluster config update ...
	I0920 19:26:09.671503  310233 start.go:255] writing updated cluster config ...
	I0920 19:26:09.671849  310233 ssh_runner.go:195] Run: rm -f paused
	I0920 19:26:09.735027  310233 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:26:09.736817  310233 out.go:177] * Done! kubectl is now configured to use "newest-cni-398410" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.268261315Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860380268235955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f69ed15c-3ce9-4eed-8dc2-595a79e7de54 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.268880245Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c597e106-e9cd-4f1a-8ff8-e52d9835419d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.268982488Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c597e106-e9cd-4f1a-8ff8-e52d9835419d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.269228046Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2723b5d731b534edb71c51be36dec0081571147745dfe4442c1ee88181556806,PodSandboxId:d2208cdae4af7ced77ea12dbd1e3947a0da5374812086aa97b51738d6b35e3df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859392193511613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bcc482a-6905-436a-8d90-7eee9ba18f8b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f0fd0790b5de170f5af7f48ee38678a47bf873c54e6758b49e81ce86fbe9611,PodSandboxId:8ac9258f316c5409c0553943663ab92d1b23c2be46e527c2a3eae7f1c2acc8c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859391469683495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85a441e8-39b0-4623-a7bd-eebbd1574f20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4018f260defa1395c4657499327bc4925863a9f2b0cb4ca64a1fff4603ffe1f,PodSandboxId:24021bdf4ca6509fd90d9b58cb2b724be60be9a1ffb3285df8076ca2168b7feb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859391039092866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2zlww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
eb78763-7160-4ae9-80c3-87a82a6dc992,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c364c544d7d7f031ff432964306d762430675bdcc271d3bab8e950e2a8f7fc28,PodSandboxId:e7ef26e4c8f0c653bd9881855e65395bf79dcaf6fdaddeae4dc5e946c60146a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726859390740091257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-whcbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2dbb60-1a51-4874-98b8-75d1a35b0512,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ccd1fc6b8f8df2809889e166b96a1b6c2f3430bc180ebf5a173b2821961e388,PodSandboxId:6d28730d9d07ce3c7619c35ffb691940a085f62bcb611da954fd68df457e9c5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859379711547432,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84766edb8659eb295c2d46988cdb09d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf43e654caeb5f98cdf5e1da55898b1b9522078243dc9fc219566b5edb16a0f8,PodSandboxId:089d0106a913c501ade84880df8565badec9254b77b75dde21d58d61626d8eb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859379751370179,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 509a6ad1e89e1bc816872362edf1d642,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cf7beb540ba5cb3ff61d3baf08eafacc28e1e1c95c33cb69a1032e335f1ed53,PodSandboxId:b29721bb4db6b41c0e4c1131aa8113f2e97ae8410a53507d8263ca02504b281f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859379726914209,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fddbf0260d53ab7d82af8de05368be,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb06bd75ec7169fc5a461bfe6e3646d39d5a4ee55e2ccaa859a7ed4a2d4a2f0,PodSandboxId:b6cdc4bbe6592859f144838f4dcf92504c58b9608f3a27516c156df6414d15d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859379680697813,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c4226e1838e7a3ea47eacc9d8a2390,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aca960651b40b7a4edbec39fb2f9680b94b5bf9052d8e236bcb33f39f501413,PodSandboxId:662adc8c49ec5df33d030dc501e3b781286be011fd95b47104498c6591e70ef1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726859092343694002,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fddbf0260d53ab7d82af8de05368be,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c597e106-e9cd-4f1a-8ff8-e52d9835419d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.314453957Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=934f87fc-0cb2-4bcd-bfde-f09365198269 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.314525878Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=934f87fc-0cb2-4bcd-bfde-f09365198269 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.316308914Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cc985615-3a9b-4d86-8033-dd03348cceec name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.316747892Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860380316722773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc985615-3a9b-4d86-8033-dd03348cceec name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.317582406Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c5c85a7-1020-4ad9-8f7a-692b6487275d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.317638284Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c5c85a7-1020-4ad9-8f7a-692b6487275d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.317941435Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2723b5d731b534edb71c51be36dec0081571147745dfe4442c1ee88181556806,PodSandboxId:d2208cdae4af7ced77ea12dbd1e3947a0da5374812086aa97b51738d6b35e3df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859392193511613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bcc482a-6905-436a-8d90-7eee9ba18f8b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f0fd0790b5de170f5af7f48ee38678a47bf873c54e6758b49e81ce86fbe9611,PodSandboxId:8ac9258f316c5409c0553943663ab92d1b23c2be46e527c2a3eae7f1c2acc8c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859391469683495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85a441e8-39b0-4623-a7bd-eebbd1574f20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4018f260defa1395c4657499327bc4925863a9f2b0cb4ca64a1fff4603ffe1f,PodSandboxId:24021bdf4ca6509fd90d9b58cb2b724be60be9a1ffb3285df8076ca2168b7feb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859391039092866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2zlww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
eb78763-7160-4ae9-80c3-87a82a6dc992,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c364c544d7d7f031ff432964306d762430675bdcc271d3bab8e950e2a8f7fc28,PodSandboxId:e7ef26e4c8f0c653bd9881855e65395bf79dcaf6fdaddeae4dc5e946c60146a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726859390740091257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-whcbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2dbb60-1a51-4874-98b8-75d1a35b0512,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ccd1fc6b8f8df2809889e166b96a1b6c2f3430bc180ebf5a173b2821961e388,PodSandboxId:6d28730d9d07ce3c7619c35ffb691940a085f62bcb611da954fd68df457e9c5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859379711547432,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84766edb8659eb295c2d46988cdb09d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf43e654caeb5f98cdf5e1da55898b1b9522078243dc9fc219566b5edb16a0f8,PodSandboxId:089d0106a913c501ade84880df8565badec9254b77b75dde21d58d61626d8eb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859379751370179,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 509a6ad1e89e1bc816872362edf1d642,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cf7beb540ba5cb3ff61d3baf08eafacc28e1e1c95c33cb69a1032e335f1ed53,PodSandboxId:b29721bb4db6b41c0e4c1131aa8113f2e97ae8410a53507d8263ca02504b281f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859379726914209,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fddbf0260d53ab7d82af8de05368be,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb06bd75ec7169fc5a461bfe6e3646d39d5a4ee55e2ccaa859a7ed4a2d4a2f0,PodSandboxId:b6cdc4bbe6592859f144838f4dcf92504c58b9608f3a27516c156df6414d15d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859379680697813,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c4226e1838e7a3ea47eacc9d8a2390,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aca960651b40b7a4edbec39fb2f9680b94b5bf9052d8e236bcb33f39f501413,PodSandboxId:662adc8c49ec5df33d030dc501e3b781286be011fd95b47104498c6591e70ef1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726859092343694002,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fddbf0260d53ab7d82af8de05368be,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c5c85a7-1020-4ad9-8f7a-692b6487275d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.360621646Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db38280e-6d59-468c-ab97-cfe6cf63a238 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.360701293Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db38280e-6d59-468c-ab97-cfe6cf63a238 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.362051263Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd77384e-955e-4e77-ae89-1ed56e2b8159 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.362503761Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860380362477132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd77384e-955e-4e77-ae89-1ed56e2b8159 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.363154591Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f1aae2c-bffc-4c6c-a49b-479716d58d8c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.363212934Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f1aae2c-bffc-4c6c-a49b-479716d58d8c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.363437926Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2723b5d731b534edb71c51be36dec0081571147745dfe4442c1ee88181556806,PodSandboxId:d2208cdae4af7ced77ea12dbd1e3947a0da5374812086aa97b51738d6b35e3df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859392193511613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bcc482a-6905-436a-8d90-7eee9ba18f8b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f0fd0790b5de170f5af7f48ee38678a47bf873c54e6758b49e81ce86fbe9611,PodSandboxId:8ac9258f316c5409c0553943663ab92d1b23c2be46e527c2a3eae7f1c2acc8c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859391469683495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85a441e8-39b0-4623-a7bd-eebbd1574f20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4018f260defa1395c4657499327bc4925863a9f2b0cb4ca64a1fff4603ffe1f,PodSandboxId:24021bdf4ca6509fd90d9b58cb2b724be60be9a1ffb3285df8076ca2168b7feb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859391039092866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2zlww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
eb78763-7160-4ae9-80c3-87a82a6dc992,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c364c544d7d7f031ff432964306d762430675bdcc271d3bab8e950e2a8f7fc28,PodSandboxId:e7ef26e4c8f0c653bd9881855e65395bf79dcaf6fdaddeae4dc5e946c60146a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726859390740091257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-whcbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2dbb60-1a51-4874-98b8-75d1a35b0512,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ccd1fc6b8f8df2809889e166b96a1b6c2f3430bc180ebf5a173b2821961e388,PodSandboxId:6d28730d9d07ce3c7619c35ffb691940a085f62bcb611da954fd68df457e9c5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859379711547432,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84766edb8659eb295c2d46988cdb09d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf43e654caeb5f98cdf5e1da55898b1b9522078243dc9fc219566b5edb16a0f8,PodSandboxId:089d0106a913c501ade84880df8565badec9254b77b75dde21d58d61626d8eb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859379751370179,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 509a6ad1e89e1bc816872362edf1d642,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cf7beb540ba5cb3ff61d3baf08eafacc28e1e1c95c33cb69a1032e335f1ed53,PodSandboxId:b29721bb4db6b41c0e4c1131aa8113f2e97ae8410a53507d8263ca02504b281f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859379726914209,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fddbf0260d53ab7d82af8de05368be,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb06bd75ec7169fc5a461bfe6e3646d39d5a4ee55e2ccaa859a7ed4a2d4a2f0,PodSandboxId:b6cdc4bbe6592859f144838f4dcf92504c58b9608f3a27516c156df6414d15d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859379680697813,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c4226e1838e7a3ea47eacc9d8a2390,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aca960651b40b7a4edbec39fb2f9680b94b5bf9052d8e236bcb33f39f501413,PodSandboxId:662adc8c49ec5df33d030dc501e3b781286be011fd95b47104498c6591e70ef1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726859092343694002,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fddbf0260d53ab7d82af8de05368be,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6f1aae2c-bffc-4c6c-a49b-479716d58d8c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.399583109Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b948020-f123-4115-a961-fb81c57c0657 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.399656469Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b948020-f123-4115-a961-fb81c57c0657 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.401072240Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d084a0e1-955e-4acf-a38d-9ab4a4a23ba1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.401484724Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860380401462586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d084a0e1-955e-4acf-a38d-9ab4a4a23ba1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.402001312Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ec894db-7ea5-46e6-8352-ba1d717475aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.402060236Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ec894db-7ea5-46e6-8352-ba1d717475aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:26:20 embed-certs-339897 crio[711]: time="2024-09-20 19:26:20.402258533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2723b5d731b534edb71c51be36dec0081571147745dfe4442c1ee88181556806,PodSandboxId:d2208cdae4af7ced77ea12dbd1e3947a0da5374812086aa97b51738d6b35e3df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859392193511613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bcc482a-6905-436a-8d90-7eee9ba18f8b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f0fd0790b5de170f5af7f48ee38678a47bf873c54e6758b49e81ce86fbe9611,PodSandboxId:8ac9258f316c5409c0553943663ab92d1b23c2be46e527c2a3eae7f1c2acc8c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859391469683495,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7fxdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85a441e8-39b0-4623-a7bd-eebbd1574f20,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4018f260defa1395c4657499327bc4925863a9f2b0cb4ca64a1fff4603ffe1f,PodSandboxId:24021bdf4ca6509fd90d9b58cb2b724be60be9a1ffb3285df8076ca2168b7feb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859391039092866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2zlww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
eb78763-7160-4ae9-80c3-87a82a6dc992,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c364c544d7d7f031ff432964306d762430675bdcc271d3bab8e950e2a8f7fc28,PodSandboxId:e7ef26e4c8f0c653bd9881855e65395bf79dcaf6fdaddeae4dc5e946c60146a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726859390740091257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-whcbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2dbb60-1a51-4874-98b8-75d1a35b0512,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ccd1fc6b8f8df2809889e166b96a1b6c2f3430bc180ebf5a173b2821961e388,PodSandboxId:6d28730d9d07ce3c7619c35ffb691940a085f62bcb611da954fd68df457e9c5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859379711547432,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84766edb8659eb295c2d46988cdb09d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf43e654caeb5f98cdf5e1da55898b1b9522078243dc9fc219566b5edb16a0f8,PodSandboxId:089d0106a913c501ade84880df8565badec9254b77b75dde21d58d61626d8eb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859379751370179,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 509a6ad1e89e1bc816872362edf1d642,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cf7beb540ba5cb3ff61d3baf08eafacc28e1e1c95c33cb69a1032e335f1ed53,PodSandboxId:b29721bb4db6b41c0e4c1131aa8113f2e97ae8410a53507d8263ca02504b281f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859379726914209,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fddbf0260d53ab7d82af8de05368be,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb06bd75ec7169fc5a461bfe6e3646d39d5a4ee55e2ccaa859a7ed4a2d4a2f0,PodSandboxId:b6cdc4bbe6592859f144838f4dcf92504c58b9608f3a27516c156df6414d15d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859379680697813,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c4226e1838e7a3ea47eacc9d8a2390,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aca960651b40b7a4edbec39fb2f9680b94b5bf9052d8e236bcb33f39f501413,PodSandboxId:662adc8c49ec5df33d030dc501e3b781286be011fd95b47104498c6591e70ef1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726859092343694002,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-339897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fddbf0260d53ab7d82af8de05368be,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ec894db-7ea5-46e6-8352-ba1d717475aa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2723b5d731b53       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   d2208cdae4af7       storage-provisioner
	9f0fd0790b5de       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   8ac9258f316c5       coredns-7c65d6cfc9-7fxdr
	d4018f260defa       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   24021bdf4ca65       coredns-7c65d6cfc9-2zlww
	c364c544d7d7f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   16 minutes ago      Running             kube-proxy                0                   e7ef26e4c8f0c       kube-proxy-whcbh
	cf43e654caeb5       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   16 minutes ago      Running             kube-controller-manager   2                   089d0106a913c       kube-controller-manager-embed-certs-339897
	0cf7beb540ba5       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   16 minutes ago      Running             kube-apiserver            2                   b29721bb4db6b       kube-apiserver-embed-certs-339897
	5ccd1fc6b8f8d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   6d28730d9d07c       etcd-embed-certs-339897
	2eb06bd75ec71       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   16 minutes ago      Running             kube-scheduler            2                   b6cdc4bbe6592       kube-scheduler-embed-certs-339897
	9aca960651b40       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   21 minutes ago      Exited              kube-apiserver            1                   662adc8c49ec5       kube-apiserver-embed-certs-339897
	
	
	==> coredns [9f0fd0790b5de170f5af7f48ee38678a47bf873c54e6758b49e81ce86fbe9611] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [d4018f260defa1395c4657499327bc4925863a9f2b0cb4ca64a1fff4603ffe1f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-339897
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-339897
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=embed-certs-339897
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T19_09_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 19:09:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-339897
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:26:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:25:14 +0000   Fri, 20 Sep 2024 19:09:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:25:14 +0000   Fri, 20 Sep 2024 19:09:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:25:14 +0000   Fri, 20 Sep 2024 19:09:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:25:14 +0000   Fri, 20 Sep 2024 19:09:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.72
	  Hostname:    embed-certs-339897
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 20e655c3886546f3b3a07c10a5b65d8e
	  System UUID:                20e655c3-8865-46f3-b3a0-7c10a5b65d8e
	  Boot ID:                    20c108b9-0be7-4e19-94a8-dadaa6f487ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-2zlww                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-7fxdr                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-339897                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-339897             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-339897    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-whcbh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-339897             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-tw9fh               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node embed-certs-339897 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node embed-certs-339897 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node embed-certs-339897 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node embed-certs-339897 event: Registered Node embed-certs-339897 in Controller
	
	
	==> dmesg <==
	[  +0.051132] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037637] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.778132] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.874331] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.537649] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.183454] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.057271] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053305] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.164608] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.145583] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.293758] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[  +4.053385] systemd-fstab-generator[793]: Ignoring "noauto" option for root device
	[  +2.012710] systemd-fstab-generator[913]: Ignoring "noauto" option for root device
	[  +0.060624] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.524096] kauditd_printk_skb: 69 callbacks suppressed
	[Sep20 19:05] kauditd_printk_skb: 90 callbacks suppressed
	[Sep20 19:09] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.997588] systemd-fstab-generator[2582]: Ignoring "noauto" option for root device
	[  +4.842166] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.735040] systemd-fstab-generator[2904]: Ignoring "noauto" option for root device
	[  +5.414043] systemd-fstab-generator[3030]: Ignoring "noauto" option for root device
	[  +0.112090] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.168313] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [5ccd1fc6b8f8df2809889e166b96a1b6c2f3430bc180ebf5a173b2821961e388] <==
	{"level":"info","ts":"2024-09-20T19:09:40.327627Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"69bbf9a7ee633bdb became leader at term 2"}
	{"level":"info","ts":"2024-09-20T19:09:40.327665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 69bbf9a7ee633bdb elected leader 69bbf9a7ee633bdb at term 2"}
	{"level":"info","ts":"2024-09-20T19:09:40.332258Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"69bbf9a7ee633bdb","local-member-attributes":"{Name:embed-certs-339897 ClientURLs:[https://192.168.72.72:2379]}","request-path":"/0/members/69bbf9a7ee633bdb/attributes","cluster-id":"57220485084312a4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T19:09:40.332646Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:09:40.336014Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T19:09:40.336086Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T19:09:40.332665Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:09:40.334014Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:09:40.342084Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"57220485084312a4","local-member-id":"69bbf9a7ee633bdb","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:09:40.342216Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:09:40.342276Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:09:40.344987Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:09:40.345867Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.72:2379"}
	{"level":"info","ts":"2024-09-20T19:09:40.348448Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:09:40.357156Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T19:19:40.690278Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":688}
	{"level":"info","ts":"2024-09-20T19:19:40.700141Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":688,"took":"8.985121ms","hash":57412813,"current-db-size-bytes":2277376,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2277376,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-09-20T19:19:40.700262Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":57412813,"revision":688,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T19:24:40.697981Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":931}
	{"level":"info","ts":"2024-09-20T19:24:40.702670Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":931,"took":"4.321326ms","hash":2428312730,"current-db-size-bytes":2277376,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1576960,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-20T19:24:40.702724Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2428312730,"revision":931,"compact-revision":688}
	{"level":"info","ts":"2024-09-20T19:25:55.961275Z","caller":"traceutil/trace.go:171","msg":"trace[1335103395] linearizableReadLoop","detail":"{readStateIndex:1448; appliedIndex:1447; }","duration":"231.083809ms","start":"2024-09-20T19:25:55.730158Z","end":"2024-09-20T19:25:55.961242Z","steps":["trace[1335103395] 'read index received'  (duration: 230.923859ms)","trace[1335103395] 'applied index is now lower than readState.Index'  (duration: 159.415µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T19:25:55.961619Z","caller":"traceutil/trace.go:171","msg":"trace[1696138206] transaction","detail":"{read_only:false; response_revision:1237; number_of_response:1; }","duration":"289.142211ms","start":"2024-09-20T19:25:55.672460Z","end":"2024-09-20T19:25:55.961602Z","steps":["trace[1696138206] 'process raft request'  (duration: 288.670452ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:25:55.962040Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.733941ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T19:25:55.962119Z","caller":"traceutil/trace.go:171","msg":"trace[384047039] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:1237; }","duration":"231.97197ms","start":"2024-09-20T19:25:55.730135Z","end":"2024-09-20T19:25:55.962107Z","steps":["trace[384047039] 'agreement among raft nodes before linearized reading'  (duration: 231.709649ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:26:20 up 21 min,  0 users,  load average: 0.25, 0.19, 0.17
	Linux embed-certs-339897 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0cf7beb540ba5cb3ff61d3baf08eafacc28e1e1c95c33cb69a1032e335f1ed53] <==
	I0920 19:22:43.313421       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 19:22:43.313422       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 19:24:42.312536       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:24:42.312873       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0920 19:24:43.314715       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:24:43.314768       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 19:24:43.314955       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:24:43.315090       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 19:24:43.315945       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 19:24:43.317089       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 19:25:43.316809       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:25:43.316873       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0920 19:25:43.317980       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 19:25:43.318248       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:25:43.318366       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 19:25:43.319551       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [9aca960651b40b7a4edbec39fb2f9680b94b5bf9052d8e236bcb33f39f501413] <==
	W0920 19:09:32.058373       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.116763       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.128470       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.137629       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.139192       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.140615       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.175765       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.187451       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.271434       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.306536       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.324028       1 logging.go:55] [core] [Channel #13 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.464831       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.486064       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.489486       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.512273       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.538514       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.556987       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.618785       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.650551       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.679872       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.731762       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.788000       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.827884       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:32.845611       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:09:34.422263       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [cf43e654caeb5f98cdf5e1da55898b1b9522078243dc9fc219566b5edb16a0f8] <==
	E0920 19:21:19.284149       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:21:19.855844       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:21:49.291096       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:21:49.868119       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:22:19.297950       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:22:19.877045       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:22:49.304777       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:22:49.885296       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:23:19.310881       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:23:19.894175       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:23:49.318262       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:23:49.902469       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:24:19.325318       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:24:19.911052       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:24:49.332799       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:24:49.923789       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 19:25:14.523360       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-339897"
	E0920 19:25:19.340586       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:25:19.937436       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:25:49.348304       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:25:49.947787       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 19:26:01.138425       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="358.724µs"
	I0920 19:26:12.134878       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="145.385µs"
	E0920 19:26:19.356238       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:26:19.964483       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c364c544d7d7f031ff432964306d762430675bdcc271d3bab8e950e2a8f7fc28] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 19:09:51.538422       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 19:09:51.567025       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.72"]
	E0920 19:09:51.567115       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 19:09:51.635526       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 19:09:51.635583       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 19:09:51.635615       1 server_linux.go:169] "Using iptables Proxier"
	I0920 19:09:51.647611       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 19:09:51.648461       1 server.go:483] "Version info" version="v1.31.1"
	I0920 19:09:51.648854       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:09:51.655384       1 config.go:199] "Starting service config controller"
	I0920 19:09:51.663791       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 19:09:51.663873       1 config.go:105] "Starting endpoint slice config controller"
	I0920 19:09:51.663881       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 19:09:51.689340       1 config.go:328] "Starting node config controller"
	I0920 19:09:51.696202       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 19:09:51.767846       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 19:09:51.768004       1 shared_informer.go:320] Caches are synced for service config
	I0920 19:09:51.796521       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2eb06bd75ec7169fc5a461bfe6e3646d39d5a4ee55e2ccaa859a7ed4a2d4a2f0] <==
	W0920 19:09:43.280942       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 19:09:43.281083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:43.291127       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 19:09:43.291173       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:43.331527       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 19:09:43.331645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:43.374740       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 19:09:43.374877       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:43.415621       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 19:09:43.415723       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:43.451776       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 19:09:43.451928       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 19:09:43.477524       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 19:09:43.477738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:43.499767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 19:09:43.500002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:43.527109       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 19:09:43.527239       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:43.635036       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 19:09:43.635085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:43.648725       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 19:09:43.648790       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:43.696301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 19:09:43.696518       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 19:09:46.712130       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 19:25:25 embed-certs-339897 kubelet[2911]: E0920 19:25:25.387778    2911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860325387513433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:25:25 embed-certs-339897 kubelet[2911]: E0920 19:25:25.387824    2911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860325387513433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:25:26 embed-certs-339897 kubelet[2911]: E0920 19:25:26.118679    2911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tw9fh" podUID="8366591d-8916-4b9f-be8a-64ddc185f576"
	Sep 20 19:25:35 embed-certs-339897 kubelet[2911]: E0920 19:25:35.389587    2911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860335389217224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:25:35 embed-certs-339897 kubelet[2911]: E0920 19:25:35.389624    2911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860335389217224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:25:39 embed-certs-339897 kubelet[2911]: E0920 19:25:39.119501    2911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tw9fh" podUID="8366591d-8916-4b9f-be8a-64ddc185f576"
	Sep 20 19:25:45 embed-certs-339897 kubelet[2911]: E0920 19:25:45.154366    2911 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 19:25:45 embed-certs-339897 kubelet[2911]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 19:25:45 embed-certs-339897 kubelet[2911]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 19:25:45 embed-certs-339897 kubelet[2911]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 19:25:45 embed-certs-339897 kubelet[2911]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 19:25:45 embed-certs-339897 kubelet[2911]: E0920 19:25:45.390563    2911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860345390339746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:25:45 embed-certs-339897 kubelet[2911]: E0920 19:25:45.390606    2911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860345390339746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:25:50 embed-certs-339897 kubelet[2911]: E0920 19:25:50.138082    2911 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 20 19:25:50 embed-certs-339897 kubelet[2911]: E0920 19:25:50.138508    2911 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 20 19:25:50 embed-certs-339897 kubelet[2911]: E0920 19:25:50.138833    2911 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqh8w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-tw9fh_kube-system(8366591d-8916-4b9f-be8a-64ddc185f576): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Sep 20 19:25:50 embed-certs-339897 kubelet[2911]: E0920 19:25:50.140196    2911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-tw9fh" podUID="8366591d-8916-4b9f-be8a-64ddc185f576"
	Sep 20 19:25:55 embed-certs-339897 kubelet[2911]: E0920 19:25:55.392868    2911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860355392456859,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:25:55 embed-certs-339897 kubelet[2911]: E0920 19:25:55.392970    2911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860355392456859,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:26:01 embed-certs-339897 kubelet[2911]: E0920 19:26:01.119232    2911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tw9fh" podUID="8366591d-8916-4b9f-be8a-64ddc185f576"
	Sep 20 19:26:05 embed-certs-339897 kubelet[2911]: E0920 19:26:05.395138    2911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860365394065835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:26:05 embed-certs-339897 kubelet[2911]: E0920 19:26:05.395400    2911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860365394065835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:26:12 embed-certs-339897 kubelet[2911]: E0920 19:26:12.118986    2911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tw9fh" podUID="8366591d-8916-4b9f-be8a-64ddc185f576"
	Sep 20 19:26:15 embed-certs-339897 kubelet[2911]: E0920 19:26:15.396863    2911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860375396402695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:26:15 embed-certs-339897 kubelet[2911]: E0920 19:26:15.397373    2911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860375396402695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [2723b5d731b534edb71c51be36dec0081571147745dfe4442c1ee88181556806] <==
	I0920 19:09:52.411423       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 19:09:52.431962       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 19:09:52.432352       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 19:09:52.452466       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 19:09:52.452685       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-339897_c64a37ba-95c8-4532-96d1-45c90f750de0!
	I0920 19:09:52.454711       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a5fbf437-d763-4ef8-97ec-738d2b6a87d2", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-339897_c64a37ba-95c8-4532-96d1-45c90f750de0 became leader
	I0920 19:09:52.552922       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-339897_c64a37ba-95c8-4532-96d1-45c90f750de0!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-339897 -n embed-certs-339897
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-339897 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-tw9fh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-339897 describe pod metrics-server-6867b74b74-tw9fh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-339897 describe pod metrics-server-6867b74b74-tw9fh: exit status 1 (72.798829ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-tw9fh" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-339897 describe pod metrics-server-6867b74b74-tw9fh: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (438.13s)
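The post-mortem above can be replayed by hand against a live profile. A minimal sketch follows, assuming the embed-certs-339897 profile still exists, that a local minikube binary stands in for the CI workspace path out/minikube-linux-amd64, and that the metrics-server and dashboard pods carry their usual k8s-app labels (the dashboard label is taken from the no-preload run below; the metrics-server label is assumed). The commands only reuse minikube/kubectl invocations already shown by helpers_test.go and are illustrative, not part of the test itself:

	# apiserver state for the profile (same check as helpers_test.go:254 above)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-339897 -n embed-certs-339897
	# list non-running pods cluster-wide (same field selector as helpers_test.go:261 above)
	kubectl --context embed-certs-339897 get po -A --field-selector=status.phase!=Running
	# dashboard pods the AddonExistsAfterStop check waits for (label taken from the no-preload run)
	kubectl --context embed-certs-339897 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# metrics-server pod the kubelet log shows stuck in ImagePullBackOff against fake.domain (label assumed)
	kubectl --context embed-certs-339897 -n kube-system describe pod -l k8s-app=metrics-server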

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (330.29s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-037711 -n no-preload-037711
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-20 19:25:43.565835858 +0000 UTC m=+6606.226282514
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-037711 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-037711 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.563µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-037711 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-037711 -n no-preload-037711
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-037711 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-037711 logs -n 25: (1.176357416s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-793540 sudo                                 | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo                                 | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo find                            | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo crio                            | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-793540                                      | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-896665 | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | disable-driver-mounts-896665                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:57 UTC |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-037711             | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-037711                                   | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-339897            | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-339897                                  | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-612312  | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-037711                  | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-037711                                   | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC | 20 Sep 24 19:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-339897                 | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-425599        | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-339897                                  | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC | 20 Sep 24 19:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-612312       | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC | 20 Sep 24 19:09 UTC |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-425599                              | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-425599             | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-425599                              | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-425599                              | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| start   | -p newest-cni-398410 --memory=2200 --alsologtostderr   | newest-cni-398410            | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:25:23
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:25:23.434027  310233 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:25:23.434306  310233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:25:23.434316  310233 out.go:358] Setting ErrFile to fd 2...
	I0920 19:25:23.434321  310233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:25:23.434513  310233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 19:25:23.435117  310233 out.go:352] Setting JSON to false
	I0920 19:25:23.436154  310233 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11266,"bootTime":1726849057,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 19:25:23.436256  310233 start.go:139] virtualization: kvm guest
	I0920 19:25:23.438550  310233 out.go:177] * [newest-cni-398410] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 19:25:23.440224  310233 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 19:25:23.440225  310233 notify.go:220] Checking for updates...
	I0920 19:25:23.443174  310233 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:25:23.444565  310233 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:25:23.446106  310233 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 19:25:23.447412  310233 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 19:25:23.448801  310233 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:25:23.450682  310233 config.go:182] Loaded profile config "default-k8s-diff-port-612312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:25:23.450774  310233 config.go:182] Loaded profile config "embed-certs-339897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:25:23.450857  310233 config.go:182] Loaded profile config "no-preload-037711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:25:23.450957  310233 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:25:23.491820  310233 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 19:25:23.493346  310233 start.go:297] selected driver: kvm2
	I0920 19:25:23.493366  310233 start.go:901] validating driver "kvm2" against <nil>
	I0920 19:25:23.493379  310233 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:25:23.494160  310233 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:25:23.494260  310233 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 19:25:23.511406  310233 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 19:25:23.511493  310233 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0920 19:25:23.511565  310233 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0920 19:25:23.511829  310233 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0920 19:25:23.511866  310233 cni.go:84] Creating CNI manager for ""
	I0920 19:25:23.511930  310233 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:25:23.511940  310233 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 19:25:23.512005  310233 start.go:340] cluster config:
	{Name:newest-cni-398410 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-398410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:25:23.512137  310233 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:25:23.515010  310233 out.go:177] * Starting "newest-cni-398410" primary control-plane node in "newest-cni-398410" cluster
	I0920 19:25:23.517462  310233 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:25:23.517507  310233 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 19:25:23.517517  310233 cache.go:56] Caching tarball of preloaded images
	I0920 19:25:23.517630  310233 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 19:25:23.517645  310233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 19:25:23.517761  310233 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/config.json ...
	I0920 19:25:23.517787  310233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/newest-cni-398410/config.json: {Name:mk1b3d753bbd27adfd710d2d761bbc72d5415fd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:25:23.518015  310233 start.go:360] acquireMachinesLock for newest-cni-398410: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:25:23.518056  310233 start.go:364] duration metric: took 22.418µs to acquireMachinesLock for "newest-cni-398410"
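The acquireMachinesLock lines above show machine creation being serialized behind a named lock file with a retry delay and an overall timeout. A minimal sketch of that pattern, assuming the third-party github.com/gofrs/flock package purely for illustration (it is not necessarily what minikube itself uses):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/gofrs/flock"
)

func main() {
	// Illustration only: serialize "machine create" behind a lock file,
	// retrying every 500ms and giving up after 13 minutes, mirroring the
	// Delay/Timeout values reported in the log above.
	lock := flock.New("/tmp/minikube-machines.lock")
	ctx, cancel := context.WithTimeout(context.Background(), 13*time.Minute)
	defer cancel()

	start := time.Now()
	ok, err := lock.TryLockContext(ctx, 500*time.Millisecond)
	if err != nil || !ok {
		log.Fatalf("could not acquire machines lock: %v", err)
	}
	defer lock.Unlock()

	fmt.Printf("took %s to acquire the machines lock\n", time.Since(start))
	// ... provision the machine while holding the lock ...
}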
	I0920 19:25:23.518080  310233 start.go:93] Provisioning new machine with config: &{Name:newest-cni-398410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:newest-cni-398410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:25:23.518187  310233 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 19:25:23.521147  310233 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 19:25:23.521321  310233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:25:23.521360  310233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:25:23.537831  310233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35297
	I0920 19:25:23.538411  310233 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:25:23.538998  310233 main.go:141] libmachine: Using API Version  1
	I0920 19:25:23.539021  310233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:25:23.539350  310233 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:25:23.539521  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetMachineName
	I0920 19:25:23.539658  310233 main.go:141] libmachine: (newest-cni-398410) Calling .DriverName
	I0920 19:25:23.539788  310233 start.go:159] libmachine.API.Create for "newest-cni-398410" (driver="kvm2")
	I0920 19:25:23.539814  310233 client.go:168] LocalClient.Create starting
	I0920 19:25:23.539843  310233 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem
	I0920 19:25:23.539876  310233 main.go:141] libmachine: Decoding PEM data...
	I0920 19:25:23.539890  310233 main.go:141] libmachine: Parsing certificate...
	I0920 19:25:23.539947  310233 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem
	I0920 19:25:23.539966  310233 main.go:141] libmachine: Decoding PEM data...
	I0920 19:25:23.539977  310233 main.go:141] libmachine: Parsing certificate...
	I0920 19:25:23.539993  310233 main.go:141] libmachine: Running pre-create checks...
	I0920 19:25:23.540002  310233 main.go:141] libmachine: (newest-cni-398410) Calling .PreCreateCheck
	I0920 19:25:23.540343  310233 main.go:141] libmachine: (newest-cni-398410) Calling .GetConfigRaw
	I0920 19:25:23.540721  310233 main.go:141] libmachine: Creating machine...
	I0920 19:25:23.540735  310233 main.go:141] libmachine: (newest-cni-398410) Calling .Create
	I0920 19:25:23.540927  310233 main.go:141] libmachine: (newest-cni-398410) Creating KVM machine...
	I0920 19:25:23.542395  310233 main.go:141] libmachine: (newest-cni-398410) DBG | found existing default KVM network
	I0920 19:25:23.544082  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:23.543911  310257 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000211940}
	I0920 19:25:23.544138  310233 main.go:141] libmachine: (newest-cni-398410) DBG | created network xml: 
	I0920 19:25:23.544159  310233 main.go:141] libmachine: (newest-cni-398410) DBG | <network>
	I0920 19:25:23.544174  310233 main.go:141] libmachine: (newest-cni-398410) DBG |   <name>mk-newest-cni-398410</name>
	I0920 19:25:23.544184  310233 main.go:141] libmachine: (newest-cni-398410) DBG |   <dns enable='no'/>
	I0920 19:25:23.544207  310233 main.go:141] libmachine: (newest-cni-398410) DBG |   
	I0920 19:25:23.544231  310233 main.go:141] libmachine: (newest-cni-398410) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 19:25:23.544246  310233 main.go:141] libmachine: (newest-cni-398410) DBG |     <dhcp>
	I0920 19:25:23.544258  310233 main.go:141] libmachine: (newest-cni-398410) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 19:25:23.544268  310233 main.go:141] libmachine: (newest-cni-398410) DBG |     </dhcp>
	I0920 19:25:23.544282  310233 main.go:141] libmachine: (newest-cni-398410) DBG |   </ip>
	I0920 19:25:23.544294  310233 main.go:141] libmachine: (newest-cni-398410) DBG |   
	I0920 19:25:23.544304  310233 main.go:141] libmachine: (newest-cni-398410) DBG | </network>
	I0920 19:25:23.544315  310233 main.go:141] libmachine: (newest-cni-398410) DBG | 
	I0920 19:25:23.550886  310233 main.go:141] libmachine: (newest-cni-398410) DBG | trying to create private KVM network mk-newest-cni-398410 192.168.39.0/24...
	I0920 19:25:23.636900  310233 main.go:141] libmachine: (newest-cni-398410) DBG | private KVM network mk-newest-cni-398410 192.168.39.0/24 created
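Just before creating this network, the driver probed for a free private /24 ("using free private subnet 192.168.39.0/24"). A rough, stdlib-only sketch of that idea: walk candidate 192.168.x.0/24 ranges and skip any that overlap an address already assigned to a local interface. The candidate list and step size are illustrative, not minikube's actual values.

package main

import (
	"fmt"
	"log"
	"net"
)

// freePrivateSubnet returns the first 192.168.x.0/24 candidate that does not
// contain any address currently assigned to a local interface.
func freePrivateSubnet() (*net.IPNet, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	for x := 39; x <= 254; x += 11 { // illustrative candidates: .39, .50, .61, ...
		_, candidate, err := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", x))
		if err != nil {
			return nil, err
		}
		inUse := false
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
				inUse = true
				break
			}
		}
		if !inUse {
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("no free private subnet found")
}

func main() {
	subnet, err := freePrivateSubnet()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("using free private subnet", subnet)
}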
	I0920 19:25:23.636940  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:23.636871  310257 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 19:25:23.636952  310233 main.go:141] libmachine: (newest-cni-398410) Setting up store path in /home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410 ...
	I0920 19:25:23.636969  310233 main.go:141] libmachine: (newest-cni-398410) Building disk image from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 19:25:23.637123  310233 main.go:141] libmachine: (newest-cni-398410) Downloading /home/jenkins/minikube-integration/19679-237658/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 19:25:23.935820  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:23.935649  310257 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410/id_rsa...
	I0920 19:25:24.049232  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:24.049082  310257 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410/newest-cni-398410.rawdisk...
	I0920 19:25:24.049279  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Writing magic tar header
	I0920 19:25:24.049331  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Writing SSH key tar header
	I0920 19:25:24.049358  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:24.049202  310257 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410 ...
	I0920 19:25:24.049375  310233 main.go:141] libmachine: (newest-cni-398410) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410 (perms=drwx------)
	I0920 19:25:24.049396  310233 main.go:141] libmachine: (newest-cni-398410) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube/machines (perms=drwxr-xr-x)
	I0920 19:25:24.049411  310233 main.go:141] libmachine: (newest-cni-398410) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658/.minikube (perms=drwxr-xr-x)
	I0920 19:25:24.049424  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410
	I0920 19:25:24.049440  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube/machines
	I0920 19:25:24.049462  310233 main.go:141] libmachine: (newest-cni-398410) Setting executable bit set on /home/jenkins/minikube-integration/19679-237658 (perms=drwxrwxr-x)
	I0920 19:25:24.049475  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 19:25:24.049489  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19679-237658
	I0920 19:25:24.049501  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 19:25:24.049514  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Checking permissions on dir: /home/jenkins
	I0920 19:25:24.049525  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Checking permissions on dir: /home
	I0920 19:25:24.049537  310233 main.go:141] libmachine: (newest-cni-398410) DBG | Skipping /home - not owner
	I0920 19:25:24.049551  310233 main.go:141] libmachine: (newest-cni-398410) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 19:25:24.049565  310233 main.go:141] libmachine: (newest-cni-398410) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 19:25:24.049576  310233 main.go:141] libmachine: (newest-cni-398410) Creating domain...
	I0920 19:25:24.050737  310233 main.go:141] libmachine: (newest-cni-398410) define libvirt domain using xml: 
	I0920 19:25:24.050763  310233 main.go:141] libmachine: (newest-cni-398410) <domain type='kvm'>
	I0920 19:25:24.050779  310233 main.go:141] libmachine: (newest-cni-398410)   <name>newest-cni-398410</name>
	I0920 19:25:24.050792  310233 main.go:141] libmachine: (newest-cni-398410)   <memory unit='MiB'>2200</memory>
	I0920 19:25:24.050804  310233 main.go:141] libmachine: (newest-cni-398410)   <vcpu>2</vcpu>
	I0920 19:25:24.050808  310233 main.go:141] libmachine: (newest-cni-398410)   <features>
	I0920 19:25:24.050813  310233 main.go:141] libmachine: (newest-cni-398410)     <acpi/>
	I0920 19:25:24.050817  310233 main.go:141] libmachine: (newest-cni-398410)     <apic/>
	I0920 19:25:24.050822  310233 main.go:141] libmachine: (newest-cni-398410)     <pae/>
	I0920 19:25:24.050827  310233 main.go:141] libmachine: (newest-cni-398410)     
	I0920 19:25:24.050835  310233 main.go:141] libmachine: (newest-cni-398410)   </features>
	I0920 19:25:24.050839  310233 main.go:141] libmachine: (newest-cni-398410)   <cpu mode='host-passthrough'>
	I0920 19:25:24.050846  310233 main.go:141] libmachine: (newest-cni-398410)   
	I0920 19:25:24.050850  310233 main.go:141] libmachine: (newest-cni-398410)   </cpu>
	I0920 19:25:24.050859  310233 main.go:141] libmachine: (newest-cni-398410)   <os>
	I0920 19:25:24.050870  310233 main.go:141] libmachine: (newest-cni-398410)     <type>hvm</type>
	I0920 19:25:24.050882  310233 main.go:141] libmachine: (newest-cni-398410)     <boot dev='cdrom'/>
	I0920 19:25:24.050889  310233 main.go:141] libmachine: (newest-cni-398410)     <boot dev='hd'/>
	I0920 19:25:24.050901  310233 main.go:141] libmachine: (newest-cni-398410)     <bootmenu enable='no'/>
	I0920 19:25:24.050906  310233 main.go:141] libmachine: (newest-cni-398410)   </os>
	I0920 19:25:24.050911  310233 main.go:141] libmachine: (newest-cni-398410)   <devices>
	I0920 19:25:24.050921  310233 main.go:141] libmachine: (newest-cni-398410)     <disk type='file' device='cdrom'>
	I0920 19:25:24.050936  310233 main.go:141] libmachine: (newest-cni-398410)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410/boot2docker.iso'/>
	I0920 19:25:24.050952  310233 main.go:141] libmachine: (newest-cni-398410)       <target dev='hdc' bus='scsi'/>
	I0920 19:25:24.050963  310233 main.go:141] libmachine: (newest-cni-398410)       <readonly/>
	I0920 19:25:24.050969  310233 main.go:141] libmachine: (newest-cni-398410)     </disk>
	I0920 19:25:24.050980  310233 main.go:141] libmachine: (newest-cni-398410)     <disk type='file' device='disk'>
	I0920 19:25:24.050988  310233 main.go:141] libmachine: (newest-cni-398410)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 19:25:24.051003  310233 main.go:141] libmachine: (newest-cni-398410)       <source file='/home/jenkins/minikube-integration/19679-237658/.minikube/machines/newest-cni-398410/newest-cni-398410.rawdisk'/>
	I0920 19:25:24.051014  310233 main.go:141] libmachine: (newest-cni-398410)       <target dev='hda' bus='virtio'/>
	I0920 19:25:24.051039  310233 main.go:141] libmachine: (newest-cni-398410)     </disk>
	I0920 19:25:24.051060  310233 main.go:141] libmachine: (newest-cni-398410)     <interface type='network'>
	I0920 19:25:24.051072  310233 main.go:141] libmachine: (newest-cni-398410)       <source network='mk-newest-cni-398410'/>
	I0920 19:25:24.051082  310233 main.go:141] libmachine: (newest-cni-398410)       <model type='virtio'/>
	I0920 19:25:24.051091  310233 main.go:141] libmachine: (newest-cni-398410)     </interface>
	I0920 19:25:24.051102  310233 main.go:141] libmachine: (newest-cni-398410)     <interface type='network'>
	I0920 19:25:24.051113  310233 main.go:141] libmachine: (newest-cni-398410)       <source network='default'/>
	I0920 19:25:24.051127  310233 main.go:141] libmachine: (newest-cni-398410)       <model type='virtio'/>
	I0920 19:25:24.051153  310233 main.go:141] libmachine: (newest-cni-398410)     </interface>
	I0920 19:25:24.051171  310233 main.go:141] libmachine: (newest-cni-398410)     <serial type='pty'>
	I0920 19:25:24.051178  310233 main.go:141] libmachine: (newest-cni-398410)       <target port='0'/>
	I0920 19:25:24.051184  310233 main.go:141] libmachine: (newest-cni-398410)     </serial>
	I0920 19:25:24.051191  310233 main.go:141] libmachine: (newest-cni-398410)     <console type='pty'>
	I0920 19:25:24.051201  310233 main.go:141] libmachine: (newest-cni-398410)       <target type='serial' port='0'/>
	I0920 19:25:24.051214  310233 main.go:141] libmachine: (newest-cni-398410)     </console>
	I0920 19:25:24.051228  310233 main.go:141] libmachine: (newest-cni-398410)     <rng model='virtio'>
	I0920 19:25:24.051249  310233 main.go:141] libmachine: (newest-cni-398410)       <backend model='random'>/dev/random</backend>
	I0920 19:25:24.051261  310233 main.go:141] libmachine: (newest-cni-398410)     </rng>
	I0920 19:25:24.051272  310233 main.go:141] libmachine: (newest-cni-398410)     
	I0920 19:25:24.051285  310233 main.go:141] libmachine: (newest-cni-398410)     
	I0920 19:25:24.051295  310233 main.go:141] libmachine: (newest-cni-398410)   </devices>
	I0920 19:25:24.051304  310233 main.go:141] libmachine: (newest-cni-398410) </domain>
	I0920 19:25:24.051316  310233 main.go:141] libmachine: (newest-cni-398410) 
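The XML just printed is the libvirt domain definition handed to libvirtd. A minimal sketch of the define-and-start round trip using the libvirt Go bindings (libvirt.org/go/libvirt); the connection URI comes from the config above, but treat the exact binding calls as an assumption rather than the driver's actual code path.

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI from the cluster config above
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	domainXML := `<domain type='kvm'>...</domain>` // placeholder for the XML printed in the log

	// Define the persistent domain, then boot it (the virsh define + virsh start equivalent).
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start domain: %v", err)
	}

	name, _ := dom.GetName()
	log.Printf("domain %s defined and started", name)
}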
	I0920 19:25:24.056742  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:e4:05:7d in network default
	I0920 19:25:24.057305  310233 main.go:141] libmachine: (newest-cni-398410) Ensuring networks are active...
	I0920 19:25:24.057334  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:24.058166  310233 main.go:141] libmachine: (newest-cni-398410) Ensuring network default is active
	I0920 19:25:24.058433  310233 main.go:141] libmachine: (newest-cni-398410) Ensuring network mk-newest-cni-398410 is active
	I0920 19:25:24.059012  310233 main.go:141] libmachine: (newest-cni-398410) Getting domain xml...
	I0920 19:25:24.059921  310233 main.go:141] libmachine: (newest-cni-398410) Creating domain...
	I0920 19:25:25.358942  310233 main.go:141] libmachine: (newest-cni-398410) Waiting to get IP...
	I0920 19:25:25.361061  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:25.361575  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:25.361622  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:25.361538  310257 retry.go:31] will retry after 262.08471ms: waiting for machine to come up
	I0920 19:25:25.625209  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:25.625806  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:25.625834  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:25.625747  310257 retry.go:31] will retry after 389.923077ms: waiting for machine to come up
	I0920 19:25:26.017408  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:26.018045  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:26.018077  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:26.017993  310257 retry.go:31] will retry after 473.133715ms: waiting for machine to come up
	I0920 19:25:26.492258  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:26.492794  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:26.492815  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:26.492752  310257 retry.go:31] will retry after 524.383369ms: waiting for machine to come up
	I0920 19:25:27.018420  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:27.019712  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:27.019739  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:27.019645  310257 retry.go:31] will retry after 504.825618ms: waiting for machine to come up
	I0920 19:25:27.526456  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:27.526998  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:27.527023  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:27.526947  310257 retry.go:31] will retry after 918.804995ms: waiting for machine to come up
	I0920 19:25:28.447039  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:28.447667  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:28.447699  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:28.447618  310257 retry.go:31] will retry after 1.099101438s: waiting for machine to come up
	I0920 19:25:29.548392  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:29.548923  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:29.548954  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:29.548864  310257 retry.go:31] will retry after 1.050526325s: waiting for machine to come up
	I0920 19:25:30.600977  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:30.601540  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:30.601576  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:30.601488  310257 retry.go:31] will retry after 1.698668339s: waiting for machine to come up
	I0920 19:25:32.301339  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:32.301764  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:32.301796  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:32.301708  310257 retry.go:31] will retry after 2.040739428s: waiting for machine to come up
	I0920 19:25:34.344054  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:34.344544  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:34.344568  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:34.344515  310257 retry.go:31] will retry after 2.113534621s: waiting for machine to come up
	I0920 19:25:36.460218  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:36.460672  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:36.460699  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:36.460617  310257 retry.go:31] will retry after 2.700285138s: waiting for machine to come up
	I0920 19:25:39.163142  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:39.163728  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:39.163757  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:39.163673  310257 retry.go:31] will retry after 2.894153028s: waiting for machine to come up
	I0920 19:25:42.059269  310233 main.go:141] libmachine: (newest-cni-398410) DBG | domain newest-cni-398410 has defined MAC address 52:54:00:50:69:77 in network mk-newest-cni-398410
	I0920 19:25:42.059790  310233 main.go:141] libmachine: (newest-cni-398410) DBG | unable to find current IP address of domain newest-cni-398410 in network mk-newest-cni-398410
	I0920 19:25:42.059822  310233 main.go:141] libmachine: (newest-cni-398410) DBG | I0920 19:25:42.059734  310257 retry.go:31] will retry after 4.180679222s: waiting for machine to come up
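The run of retry.go lines above is the driver polling the new domain for a DHCP lease, sleeping a little longer (with jitter) after every miss. A stdlib-only sketch of that wait loop; lookupIP is a hypothetical stand-in for the actual libvirt DHCP-lease query.

package main

import (
	"errors"
	"fmt"
	"log"
	"math/rand"
	"time"
)

// lookupIP is hypothetical: it stands in for querying libvirt's DHCP leases
// for the domain's MAC address, and only exists to make the loop complete.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls until the machine reports an address or the deadline passes,
// growing the delay and adding jitter on each attempt, as in the log above.
func waitForIP(domain string, deadline time.Duration) (string, error) {
	delay := 250 * time.Millisecond
	start := time.Now()
	for attempt := 1; time.Since(start) < deadline; attempt++ {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
		log.Printf("retry %d: will retry after %s: waiting for machine to come up", attempt, sleep)
		time.Sleep(sleep)
		delay += delay / 2 // back off gradually
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
}

func main() {
	if ip, err := waitForIP("newest-cni-398410", 10*time.Second); err != nil {
		log.Println(err)
	} else {
		fmt.Println("machine IP:", ip)
	}
}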
	
	
	==> CRI-O <==
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.238181805Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac516916-92e6-42b2-8310-8ad741f73a96 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.239309689Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df0da0c5-3f33-4fbe-9b4f-312ed04cd89c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.239684956Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860344239657566,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df0da0c5-3f33-4fbe-9b4f-312ed04cd89c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.240234945Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83c05789-8405-4b80-b970-ef2d70ec6f1c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.240301905Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83c05789-8405-4b80-b970-ef2d70ec6f1c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.240511412Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c3b2c73c79f03835b2d7f9bbcac7a9e7daab8399d4394c4f5e57edfe00b04ca,PodSandboxId:5c5864ca73b60ef9f89df1ef0451bc957e0ebdb49178a80c23078b851315309a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859462020503944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f05c0a-c6be-4e68-959e-966c17c9cc5e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:490f881a4145983bd5f22b576485c4167b96e3d03a669a764d2b77d254dbf8c9,PodSandboxId:8e0201363b51353b38f798de986fb30d95024019a1f71029376b7641cdfb392f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859461924533095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h84nm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ada3ba7-1ccd-474b-850b-c00a77dfbb92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3ba4d23673d8beab73a909b9507fce9cc7b80319a2ae48cd1cfd7ea08e5886,PodSandboxId:8984188f2284180601ab0bef07fd03d01c0f41c8e650d91a06bd15bf331625af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859461886672209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gdfh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61
c6d6d8-62b9-4db3-a3c3-fd0daec82a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc3cf8747bd605b197ec515407d007bf6797f30c00b192fc7c04f5b68554df6,PodSandboxId:bc2edec3c4385ecac9ef4cfb1f527fe8422d93c09aaa0077df38f25faec360ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726859461267199152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvfqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2170ef3f-58f0-4d42-9f15-d9c952e0e2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa21f43834f66104e826a833971dede5d642811a29cb6fa3b34b5bfe378a890,PodSandboxId:766829d60dae13109dc74c598c065a06cac4eb91829b41d96c478098bd304244,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859450705134369,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c3d500e11904aa0df64b9be940c73c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782f8908af7306fd228aadd0d307c4db0a42502fea560906311314ac5c6e0b68,PodSandboxId:c5c7ac8990434e3448492ac229767311a5f3b425b397903688ae3cf776db9afd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859450661715750,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad8cc73a7e9abe289ec90a38b985562,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3e6413ae85d5fba0d2fef8822758664194d4f1df781937591a684aabe9ec9c,PodSandboxId:299cb322be009f759f2ffc3b60c7f9b4281f8c034303ae8d9c7c1c9cfd692ed2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859450708544893,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72cb8cb71b12a9dd016202c6ee7de79a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f5bf350558c57eadea4484da5b43d0633125ef5339961e8a0b179f0f4d660f,PodSandboxId:83fb186f2e25d7f973e0915b3c020ba520ff9f155b0c2c61a74864f9e1b44992,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859450618363481,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd0c2280ae4c661dfd29b6dea883efb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c427dc7b4fa266356a38b8def1c1cce91a76dd31495176c88816cd1310deed,PodSandboxId:b3bbf11fb11f2b91152f3dbaa44973c84519501dfdd75f2a6a5157de8b4232c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726859163605757361,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd0c2280ae4c661dfd29b6dea883efb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83c05789-8405-4b80-b970-ef2d70ec6f1c name=/runtime.v1.RuntimeService/ListContainers
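The CRI-O entries above are the server side of CRI gRPC calls (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers) issued by the kubelet and by the test's log collection. A small client-side sketch of the same calls against the CRI-O socket, assuming the k8s.io/cri-api v1 definitions and the default /var/run/crio/crio.sock path:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O unix socket (default path; adjust if configured differently).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial crio: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// RuntimeService/Version, matching the "VersionResponse{...RuntimeName:cri-o...}" lines above.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatalf("version: %v", err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// RuntimeService/ListContainers with an empty filter returns the full container list.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("list containers: %v", err)
	}
	for _, c := range list.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}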
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.278055959Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10a400ee-961b-4625-8134-c7b878fd6bee name=/runtime.v1.RuntimeService/Version
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.278143530Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10a400ee-961b-4625-8134-c7b878fd6bee name=/runtime.v1.RuntimeService/Version
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.279460779Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=69431a20-7471-43d6-a414-cda9f3491724 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.279973673Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860344279944823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=69431a20-7471-43d6-a414-cda9f3491724 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.280428048Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=56c8ef4f-e977-4ab9-9bf7-fe0b1461cbfa name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.280602343Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56c8ef4f-e977-4ab9-9bf7-fe0b1461cbfa name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.280898590Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c3b2c73c79f03835b2d7f9bbcac7a9e7daab8399d4394c4f5e57edfe00b04ca,PodSandboxId:5c5864ca73b60ef9f89df1ef0451bc957e0ebdb49178a80c23078b851315309a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859462020503944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f05c0a-c6be-4e68-959e-966c17c9cc5e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:490f881a4145983bd5f22b576485c4167b96e3d03a669a764d2b77d254dbf8c9,PodSandboxId:8e0201363b51353b38f798de986fb30d95024019a1f71029376b7641cdfb392f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859461924533095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h84nm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ada3ba7-1ccd-474b-850b-c00a77dfbb92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3ba4d23673d8beab73a909b9507fce9cc7b80319a2ae48cd1cfd7ea08e5886,PodSandboxId:8984188f2284180601ab0bef07fd03d01c0f41c8e650d91a06bd15bf331625af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859461886672209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gdfh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61
c6d6d8-62b9-4db3-a3c3-fd0daec82a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc3cf8747bd605b197ec515407d007bf6797f30c00b192fc7c04f5b68554df6,PodSandboxId:bc2edec3c4385ecac9ef4cfb1f527fe8422d93c09aaa0077df38f25faec360ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726859461267199152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvfqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2170ef3f-58f0-4d42-9f15-d9c952e0e2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa21f43834f66104e826a833971dede5d642811a29cb6fa3b34b5bfe378a890,PodSandboxId:766829d60dae13109dc74c598c065a06cac4eb91829b41d96c478098bd304244,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859450705134369,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c3d500e11904aa0df64b9be940c73c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782f8908af7306fd228aadd0d307c4db0a42502fea560906311314ac5c6e0b68,PodSandboxId:c5c7ac8990434e3448492ac229767311a5f3b425b397903688ae3cf776db9afd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859450661715750,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad8cc73a7e9abe289ec90a38b985562,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3e6413ae85d5fba0d2fef8822758664194d4f1df781937591a684aabe9ec9c,PodSandboxId:299cb322be009f759f2ffc3b60c7f9b4281f8c034303ae8d9c7c1c9cfd692ed2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859450708544893,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72cb8cb71b12a9dd016202c6ee7de79a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f5bf350558c57eadea4484da5b43d0633125ef5339961e8a0b179f0f4d660f,PodSandboxId:83fb186f2e25d7f973e0915b3c020ba520ff9f155b0c2c61a74864f9e1b44992,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859450618363481,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd0c2280ae4c661dfd29b6dea883efb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c427dc7b4fa266356a38b8def1c1cce91a76dd31495176c88816cd1310deed,PodSandboxId:b3bbf11fb11f2b91152f3dbaa44973c84519501dfdd75f2a6a5157de8b4232c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726859163605757361,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd0c2280ae4c661dfd29b6dea883efb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=56c8ef4f-e977-4ab9-9bf7-fe0b1461cbfa name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.292557121Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=9f8f9d58-7290-431c-bfca-7117894097be name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.292917145Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:724fd81f1433ad7db0bbc12374f7c8c87d1e54f9afa03073dba5051dd93f7b8d,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-rpfqm,Uid:ba7c8518-6c3e-4751-a9a5-29c77990a29c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726859461676761905,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-rpfqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba7c8518-6c3e-4751-a9a5-29c77990a29c,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T19:11:01.369208329Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5c5864ca73b60ef9f89df1ef0451bc957e0ebdb49178a80c23078b851315309a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e7f05c0a-c6be-4e68-959e-966c17c9cc5e,Na
mespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726859461665623206,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f05c0a-c6be-4e68-959e-966c17c9cc5e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volu
mes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-20T19:11:01.347378442Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8e0201363b51353b38f798de986fb30d95024019a1f71029376b7641cdfb392f,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-h84nm,Uid:6ada3ba7-1ccd-474b-850b-c00a77dfbb92,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726859461335211258,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-h84nm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ada3ba7-1ccd-474b-850b-c00a77dfbb92,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T19:11:01.026775650Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8984188f2284180601ab0bef07fd03d01c0f41c8e650d91a06bd15bf331625af,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-gdfh9,Uid:61c6d6d8-62b9-4db3-
a3c3-fd0daec82a9f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726859461307761381,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-gdfh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61c6d6d8-62b9-4db3-a3c3-fd0daec82a9f,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T19:11:01.000233285Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bc2edec3c4385ecac9ef4cfb1f527fe8422d93c09aaa0077df38f25faec360ab,Metadata:&PodSandboxMetadata{Name:kube-proxy-bvfqh,Uid:2170ef3f-58f0-4d42-9f15-d9c952e0e2ec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726859461086171214,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-bvfqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2170ef3f-58f0-4d42-9f15-d9c952e0e2ec,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T19:11:00.778284718Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:766829d60dae13109dc74c598c065a06cac4eb91829b41d96c478098bd304244,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-037711,Uid:d3c3d500e11904aa0df64b9be940c73c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726859450461173915,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c3d500e11904aa0df64b9be940c73c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d3c3d500e11904aa0df64b9be940c73c,kubernetes.io/config.seen: 2024-09-20T19:10:50.017676139Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:83fb186f2e25d7f973e0915b3c020ba520ff9f155b0c2c61a74864f9e1b44992,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no
-preload-037711,Uid:0bd0c2280ae4c661dfd29b6dea883efb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726859450457579317,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd0c2280ae4c661dfd29b6dea883efb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.136:8443,kubernetes.io/config.hash: 0bd0c2280ae4c661dfd29b6dea883efb,kubernetes.io/config.seen: 2024-09-20T19:10:50.017673939Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c5c7ac8990434e3448492ac229767311a5f3b425b397903688ae3cf776db9afd,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-037711,Uid:5ad8cc73a7e9abe289ec90a38b985562,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726859450453139227,Labels:map[string]string{component: kube-controller-manager,io
.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad8cc73a7e9abe289ec90a38b985562,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5ad8cc73a7e9abe289ec90a38b985562,kubernetes.io/config.seen: 2024-09-20T19:10:50.017675086Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:299cb322be009f759f2ffc3b60c7f9b4281f8c034303ae8d9c7c1c9cfd692ed2,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-037711,Uid:72cb8cb71b12a9dd016202c6ee7de79a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726859450444448966,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72cb8cb71b12a9dd016202c6ee7de79a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.136:237
9,kubernetes.io/config.hash: 72cb8cb71b12a9dd016202c6ee7de79a,kubernetes.io/config.seen: 2024-09-20T19:10:50.017669685Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b3bbf11fb11f2b91152f3dbaa44973c84519501dfdd75f2a6a5157de8b4232c3,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-037711,Uid:0bd0c2280ae4c661dfd29b6dea883efb,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726859163434543771,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd0c2280ae4c661dfd29b6dea883efb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.136:8443,kubernetes.io/config.hash: 0bd0c2280ae4c661dfd29b6dea883efb,kubernetes.io/config.seen: 2024-09-20T19:06:02.953904724Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/inter
ceptors.go:74" id=9f8f9d58-7290-431c-bfca-7117894097be name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.293526954Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1668d164-aa6a-4c10-9dca-8ba1a19e6b1b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.293589564Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1668d164-aa6a-4c10-9dca-8ba1a19e6b1b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.293810396Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c3b2c73c79f03835b2d7f9bbcac7a9e7daab8399d4394c4f5e57edfe00b04ca,PodSandboxId:5c5864ca73b60ef9f89df1ef0451bc957e0ebdb49178a80c23078b851315309a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859462020503944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f05c0a-c6be-4e68-959e-966c17c9cc5e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:490f881a4145983bd5f22b576485c4167b96e3d03a669a764d2b77d254dbf8c9,PodSandboxId:8e0201363b51353b38f798de986fb30d95024019a1f71029376b7641cdfb392f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859461924533095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h84nm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ada3ba7-1ccd-474b-850b-c00a77dfbb92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3ba4d23673d8beab73a909b9507fce9cc7b80319a2ae48cd1cfd7ea08e5886,PodSandboxId:8984188f2284180601ab0bef07fd03d01c0f41c8e650d91a06bd15bf331625af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859461886672209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gdfh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61
c6d6d8-62b9-4db3-a3c3-fd0daec82a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc3cf8747bd605b197ec515407d007bf6797f30c00b192fc7c04f5b68554df6,PodSandboxId:bc2edec3c4385ecac9ef4cfb1f527fe8422d93c09aaa0077df38f25faec360ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726859461267199152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvfqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2170ef3f-58f0-4d42-9f15-d9c952e0e2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa21f43834f66104e826a833971dede5d642811a29cb6fa3b34b5bfe378a890,PodSandboxId:766829d60dae13109dc74c598c065a06cac4eb91829b41d96c478098bd304244,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859450705134369,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c3d500e11904aa0df64b9be940c73c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782f8908af7306fd228aadd0d307c4db0a42502fea560906311314ac5c6e0b68,PodSandboxId:c5c7ac8990434e3448492ac229767311a5f3b425b397903688ae3cf776db9afd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859450661715750,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad8cc73a7e9abe289ec90a38b985562,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3e6413ae85d5fba0d2fef8822758664194d4f1df781937591a684aabe9ec9c,PodSandboxId:299cb322be009f759f2ffc3b60c7f9b4281f8c034303ae8d9c7c1c9cfd692ed2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859450708544893,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72cb8cb71b12a9dd016202c6ee7de79a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f5bf350558c57eadea4484da5b43d0633125ef5339961e8a0b179f0f4d660f,PodSandboxId:83fb186f2e25d7f973e0915b3c020ba520ff9f155b0c2c61a74864f9e1b44992,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859450618363481,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd0c2280ae4c661dfd29b6dea883efb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c427dc7b4fa266356a38b8def1c1cce91a76dd31495176c88816cd1310deed,PodSandboxId:b3bbf11fb11f2b91152f3dbaa44973c84519501dfdd75f2a6a5157de8b4232c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726859163605757361,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd0c2280ae4c661dfd29b6dea883efb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1668d164-aa6a-4c10-9dca-8ba1a19e6b1b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.315399059Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c94a194-2ec3-4e2d-83d2-f8d78fc8c965 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.315487847Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c94a194-2ec3-4e2d-83d2-f8d78fc8c965 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.317002563Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca9d7cfd-ce0a-4e7f-9b60-5dff355ec8bb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.317388972Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860344317365904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca9d7cfd-ce0a-4e7f-9b60-5dff355ec8bb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.318163349Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04793349-66f0-4558-b4ec-eeeb914720a0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.318230672Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04793349-66f0-4558-b4ec-eeeb914720a0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:25:44 no-preload-037711 crio[701]: time="2024-09-20 19:25:44.318456524Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c3b2c73c79f03835b2d7f9bbcac7a9e7daab8399d4394c4f5e57edfe00b04ca,PodSandboxId:5c5864ca73b60ef9f89df1ef0451bc957e0ebdb49178a80c23078b851315309a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726859462020503944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f05c0a-c6be-4e68-959e-966c17c9cc5e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:490f881a4145983bd5f22b576485c4167b96e3d03a669a764d2b77d254dbf8c9,PodSandboxId:8e0201363b51353b38f798de986fb30d95024019a1f71029376b7641cdfb392f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859461924533095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h84nm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ada3ba7-1ccd-474b-850b-c00a77dfbb92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3ba4d23673d8beab73a909b9507fce9cc7b80319a2ae48cd1cfd7ea08e5886,PodSandboxId:8984188f2284180601ab0bef07fd03d01c0f41c8e650d91a06bd15bf331625af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726859461886672209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gdfh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61
c6d6d8-62b9-4db3-a3c3-fd0daec82a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc3cf8747bd605b197ec515407d007bf6797f30c00b192fc7c04f5b68554df6,PodSandboxId:bc2edec3c4385ecac9ef4cfb1f527fe8422d93c09aaa0077df38f25faec360ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726859461267199152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvfqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2170ef3f-58f0-4d42-9f15-d9c952e0e2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa21f43834f66104e826a833971dede5d642811a29cb6fa3b34b5bfe378a890,PodSandboxId:766829d60dae13109dc74c598c065a06cac4eb91829b41d96c478098bd304244,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726859450705134369,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c3d500e11904aa0df64b9be940c73c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782f8908af7306fd228aadd0d307c4db0a42502fea560906311314ac5c6e0b68,PodSandboxId:c5c7ac8990434e3448492ac229767311a5f3b425b397903688ae3cf776db9afd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726859450661715750,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad8cc73a7e9abe289ec90a38b985562,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3e6413ae85d5fba0d2fef8822758664194d4f1df781937591a684aabe9ec9c,PodSandboxId:299cb322be009f759f2ffc3b60c7f9b4281f8c034303ae8d9c7c1c9cfd692ed2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726859450708544893,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72cb8cb71b12a9dd016202c6ee7de79a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f5bf350558c57eadea4484da5b43d0633125ef5339961e8a0b179f0f4d660f,PodSandboxId:83fb186f2e25d7f973e0915b3c020ba520ff9f155b0c2c61a74864f9e1b44992,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726859450618363481,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd0c2280ae4c661dfd29b6dea883efb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c427dc7b4fa266356a38b8def1c1cce91a76dd31495176c88816cd1310deed,PodSandboxId:b3bbf11fb11f2b91152f3dbaa44973c84519501dfdd75f2a6a5157de8b4232c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726859163605757361,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-037711,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bd0c2280ae4c661dfd29b6dea883efb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04793349-66f0-4558-b4ec-eeeb914720a0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8c3b2c73c79f0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   5c5864ca73b60       storage-provisioner
	490f881a41459       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   8e0201363b513       coredns-7c65d6cfc9-h84nm
	4b3ba4d23673d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   8984188f22841       coredns-7c65d6cfc9-gdfh9
	7cc3cf8747bd6       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   14 minutes ago      Running             kube-proxy                0                   bc2edec3c4385       kube-proxy-bvfqh
	2c3e6413ae85d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 minutes ago      Running             etcd                      2                   299cb322be009       etcd-no-preload-037711
	3fa21f43834f6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   14 minutes ago      Running             kube-scheduler            2                   766829d60dae1       kube-scheduler-no-preload-037711
	782f8908af730       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   14 minutes ago      Running             kube-controller-manager   2                   c5c7ac8990434       kube-controller-manager-no-preload-037711
	14f5bf350558c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Running             kube-apiserver            2                   83fb186f2e25d       kube-apiserver-no-preload-037711
	21c427dc7b4fa       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   19 minutes ago      Exited              kube-apiserver            1                   b3bbf11fb11f2       kube-apiserver-no-preload-037711
	
	
	==> coredns [490f881a4145983bd5f22b576485c4167b96e3d03a669a764d2b77d254dbf8c9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [4b3ba4d23673d8beab73a909b9507fce9cc7b80319a2ae48cd1cfd7ea08e5886] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-037711
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-037711
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=no-preload-037711
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T19_10_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 19:10:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-037711
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:25:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:21:18 +0000   Fri, 20 Sep 2024 19:10:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:21:18 +0000   Fri, 20 Sep 2024 19:10:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:21:18 +0000   Fri, 20 Sep 2024 19:10:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:21:18 +0000   Fri, 20 Sep 2024 19:10:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.136
	  Hostname:    no-preload-037711
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 87f8e7b26b6046a299dad16c24bc5fb5
	  System UUID:                87f8e7b2-6b60-46a2-99da-d16c24bc5fb5
	  Boot ID:                    f31ff828-9158-466e-b0c9-85bfb6a5fd29
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-gdfh9                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7c65d6cfc9-h84nm                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-037711                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-037711             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-037711    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-bvfqh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-037711             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-rpfqm              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-037711 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-037711 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-037711 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-037711 event: Registered Node no-preload-037711 in Controller
	
	
	==> dmesg <==
	[  +0.059001] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041040] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.059321] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.957086] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.556378] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.961994] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.065549] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056560] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.183658] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.141063] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.276193] systemd-fstab-generator[692]: Ignoring "noauto" option for root device
	[Sep20 19:06] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	[  +0.065860] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.852843] systemd-fstab-generator[1352]: Ignoring "noauto" option for root device
	[  +4.628659] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.928437] kauditd_printk_skb: 90 callbacks suppressed
	[Sep20 19:10] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.341484] systemd-fstab-generator[2987]: Ignoring "noauto" option for root device
	[  +4.348387] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.695749] systemd-fstab-generator[3308]: Ignoring "noauto" option for root device
	[  +4.406032] systemd-fstab-generator[3410]: Ignoring "noauto" option for root device
	[  +0.096149] kauditd_printk_skb: 14 callbacks suppressed
	[Sep20 19:11] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [2c3e6413ae85d5fba0d2fef8822758664194d4f1df781937591a684aabe9ec9c] <==
	{"level":"info","ts":"2024-09-20T19:10:51.260785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f5f02872cabb0b8 switched to configuration voters=(4566371326770262200)"}
	{"level":"info","ts":"2024-09-20T19:10:51.260910Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"46ee4f926852f428","local-member-id":"3f5f02872cabb0b8","added-peer-id":"3f5f02872cabb0b8","added-peer-peer-urls":["https://192.168.61.136:2380"]}
	{"level":"info","ts":"2024-09-20T19:10:51.315920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f5f02872cabb0b8 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-20T19:10:51.316046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f5f02872cabb0b8 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T19:10:51.316095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f5f02872cabb0b8 received MsgPreVoteResp from 3f5f02872cabb0b8 at term 1"}
	{"level":"info","ts":"2024-09-20T19:10:51.316141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f5f02872cabb0b8 became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T19:10:51.316166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f5f02872cabb0b8 received MsgVoteResp from 3f5f02872cabb0b8 at term 2"}
	{"level":"info","ts":"2024-09-20T19:10:51.316194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f5f02872cabb0b8 became leader at term 2"}
	{"level":"info","ts":"2024-09-20T19:10:51.316219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3f5f02872cabb0b8 elected leader 3f5f02872cabb0b8 at term 2"}
	{"level":"info","ts":"2024-09-20T19:10:51.322017Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:10:51.326136Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3f5f02872cabb0b8","local-member-attributes":"{Name:no-preload-037711 ClientURLs:[https://192.168.61.136:2379]}","request-path":"/0/members/3f5f02872cabb0b8/attributes","cluster-id":"46ee4f926852f428","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T19:10:51.327036Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:10:51.327220Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T19:10:51.327275Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T19:10:51.329960Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"46ee4f926852f428","local-member-id":"3f5f02872cabb0b8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:10:51.330056Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:10:51.330101Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:10:51.330713Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:10:51.333680Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T19:10:51.327820Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:10:51.348752Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:10:51.355585Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.136:2379"}
	{"level":"info","ts":"2024-09-20T19:20:51.386905Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":684}
	{"level":"info","ts":"2024-09-20T19:20:51.396142Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":684,"took":"8.867128ms","hash":3941980792,"current-db-size-bytes":2105344,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2105344,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-09-20T19:20:51.396221Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3941980792,"revision":684,"compact-revision":-1}
	
	
	==> kernel <==
	 19:25:44 up 20 min,  0 users,  load average: 0.17, 0.16, 0.11
	Linux no-preload-037711 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [14f5bf350558c57eadea4484da5b43d0633125ef5339961e8a0b179f0f4d660f] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0920 19:20:54.197999       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:20:54.198196       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0920 19:20:54.199338       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 19:20:54.199422       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 19:21:54.200068       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:21:54.200470       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 19:21:54.200388       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:21:54.200644       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 19:21:54.201679       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 19:21:54.201761       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 19:23:54.202399       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:23:54.202903       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0920 19:23:54.202401       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:23:54.203088       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0920 19:23:54.204642       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 19:23:54.204698       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [21c427dc7b4fa266356a38b8def1c1cce91a76dd31495176c88816cd1310deed] <==
	W0920 19:10:43.500243       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.536320       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.536323       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.571974       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.614261       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.635180       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.694155       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.739228       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.745951       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.802792       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.865914       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.918390       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.989320       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.998999       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:43.999011       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:44.027755       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:44.035331       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:44.112795       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:44.142711       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:44.426135       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:45.900024       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:47.675613       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:47.989502       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:48.075493       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:10:48.165403       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [782f8908af7306fd228aadd0d307c4db0a42502fea560906311314ac5c6e0b68] <==
	E0920 19:20:30.242771       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:20:30.746740       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:21:00.249969       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:21:00.754493       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 19:21:18.484336       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-037711"
	E0920 19:21:30.257734       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:21:30.768867       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:22:00.265131       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:22:00.778530       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 19:22:01.005329       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="231.813µs"
	I0920 19:22:13.004790       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="149.497µs"
	E0920 19:22:30.270737       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:22:30.789312       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:23:00.279053       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:23:00.797319       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:23:30.285645       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:23:30.808995       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:24:00.292989       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:24:00.817480       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:24:30.299633       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:24:30.828474       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:25:00.307173       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:25:00.837537       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 19:25:30.315004       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 19:25:30.850104       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [7cc3cf8747bd605b197ec515407d007bf6797f30c00b192fc7c04f5b68554df6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 19:11:02.259699       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 19:11:02.290659       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.136"]
	E0920 19:11:02.290770       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 19:11:02.355314       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 19:11:02.355369       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 19:11:02.355403       1 server_linux.go:169] "Using iptables Proxier"
	I0920 19:11:02.357714       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 19:11:02.358168       1 server.go:483] "Version info" version="v1.31.1"
	I0920 19:11:02.358413       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:11:02.360233       1 config.go:199] "Starting service config controller"
	I0920 19:11:02.360327       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 19:11:02.360429       1 config.go:105] "Starting endpoint slice config controller"
	I0920 19:11:02.360477       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 19:11:02.361120       1 config.go:328] "Starting node config controller"
	I0920 19:11:02.361186       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 19:11:02.460995       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 19:11:02.461065       1 shared_informer.go:320] Caches are synced for service config
	I0920 19:11:02.461278       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3fa21f43834f66104e826a833971dede5d642811a29cb6fa3b34b5bfe378a890] <==
	W0920 19:10:53.218926       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 19:10:53.219355       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 19:10:53.219401       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 19:10:53.219496       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 19:10:54.118047       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 19:10:54.118133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:10:54.131590       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 19:10:54.131645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:10:54.200227       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 19:10:54.200280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:10:54.352150       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 19:10:54.352953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:10:54.393946       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 19:10:54.393993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:10:54.407494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 19:10:54.407542       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:10:54.411387       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 19:10:54.411458       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:10:54.501781       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 19:10:54.501892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 19:10:54.506334       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 19:10:54.506500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:10:54.705819       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 19:10:54.705897       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 19:10:57.008062       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 19:24:26 no-preload-037711 kubelet[3315]: E0920 19:24:26.262197    3315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860266260810054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:24:36 no-preload-037711 kubelet[3315]: E0920 19:24:36.264319    3315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860276263871683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:24:36 no-preload-037711 kubelet[3315]: E0920 19:24:36.264886    3315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860276263871683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:24:37 no-preload-037711 kubelet[3315]: E0920 19:24:37.989735    3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rpfqm" podUID="ba7c8518-6c3e-4751-a9a5-29c77990a29c"
	Sep 20 19:24:46 no-preload-037711 kubelet[3315]: E0920 19:24:46.266875    3315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860286266495687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:24:46 no-preload-037711 kubelet[3315]: E0920 19:24:46.267225    3315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860286266495687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:24:51 no-preload-037711 kubelet[3315]: E0920 19:24:51.989153    3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rpfqm" podUID="ba7c8518-6c3e-4751-a9a5-29c77990a29c"
	Sep 20 19:24:56 no-preload-037711 kubelet[3315]: E0920 19:24:56.026455    3315 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 19:24:56 no-preload-037711 kubelet[3315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 19:24:56 no-preload-037711 kubelet[3315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 19:24:56 no-preload-037711 kubelet[3315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 19:24:56 no-preload-037711 kubelet[3315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 19:24:56 no-preload-037711 kubelet[3315]: E0920 19:24:56.269194    3315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860296268943292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:24:56 no-preload-037711 kubelet[3315]: E0920 19:24:56.269239    3315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860296268943292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:25:05 no-preload-037711 kubelet[3315]: E0920 19:25:05.989677    3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rpfqm" podUID="ba7c8518-6c3e-4751-a9a5-29c77990a29c"
	Sep 20 19:25:06 no-preload-037711 kubelet[3315]: E0920 19:25:06.271456    3315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860306271036527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:25:06 no-preload-037711 kubelet[3315]: E0920 19:25:06.271538    3315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860306271036527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:25:16 no-preload-037711 kubelet[3315]: E0920 19:25:16.275213    3315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860316272906142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:25:16 no-preload-037711 kubelet[3315]: E0920 19:25:16.275418    3315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860316272906142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:25:20 no-preload-037711 kubelet[3315]: E0920 19:25:20.989968    3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rpfqm" podUID="ba7c8518-6c3e-4751-a9a5-29c77990a29c"
	Sep 20 19:25:26 no-preload-037711 kubelet[3315]: E0920 19:25:26.276978    3315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860326276634801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:25:26 no-preload-037711 kubelet[3315]: E0920 19:25:26.277037    3315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860326276634801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:25:32 no-preload-037711 kubelet[3315]: E0920 19:25:32.990176    3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rpfqm" podUID="ba7c8518-6c3e-4751-a9a5-29c77990a29c"
	Sep 20 19:25:36 no-preload-037711 kubelet[3315]: E0920 19:25:36.277946    3315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860336277672237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:25:36 no-preload-037711 kubelet[3315]: E0920 19:25:36.277980    3315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860336277672237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [8c3b2c73c79f03835b2d7f9bbcac7a9e7daab8399d4394c4f5e57edfe00b04ca] <==
	I0920 19:11:02.303189       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 19:11:02.316278       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 19:11:02.320120       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 19:11:02.333563       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 19:11:02.334100       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24acebcb-4eea-4da5-80db-8fd1c1b18ecf", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-037711_f26a285c-8115-4e61-9cd0-a7b287203681 became leader
	I0920 19:11:02.336962       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-037711_f26a285c-8115-4e61-9cd0-a7b287203681!
	I0920 19:11:02.437703       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-037711_f26a285c-8115-4e61-9cd0-a7b287203681!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-037711 -n no-preload-037711
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-037711 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-rpfqm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-037711 describe pod metrics-server-6867b74b74-rpfqm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-037711 describe pod metrics-server-6867b74b74-rpfqm: exit status 1 (63.649824ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-rpfqm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-037711 describe pod metrics-server-6867b74b74-rpfqm: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (330.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (165.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
E0920 19:23:04.096679  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
E0920 19:23:30.942549  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
E0920 19:23:33.963277  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
(the warning above appeared 16 times in a row)
E0920 19:24:15.617863  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
(the warning above appeared 33 times in a row)
E0920 19:24:48.377090  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
(the warning above appeared 25 times in a row)
E0920 19:25:13.962968  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.53:8443: connect: connection refused
(the warning above appeared 5 times in a row)
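The URL in these warnings is the raw pod-list request the test helper keeps retrying: a GET on the kubernetes-dashboard namespace filtered by the k8s-app=kubernetes-dashboard label. A rough manual equivalent, assuming the old-k8s-version-425599 kubeconfig context that the test itself uses below, would be the command that follows; while the apiserver at 192.168.39.53:8443 refuses connections it fails in the same way.

	kubectl --context old-k8s-version-425599 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard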
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-425599 -n old-k8s-version-425599
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-425599 -n old-k8s-version-425599: exit status 2 (242.659107ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-425599" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-425599 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-425599 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.208µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-425599 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
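The failed assertion above expects the dashboard-metrics-scraper deployment to reference the overridden image registry.k8s.io/echoserver:1.4, which the audit table further down shows being set with "addons enable dashboard -p old-k8s-version-425599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4". Once the apiserver is reachable again, a manual spot-check could look like the sketch below; the jsonpath expression is illustrative and not part of the test itself.

	kubectl --context old-k8s-version-425599 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'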
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-425599 -n old-k8s-version-425599
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-425599 -n old-k8s-version-425599: exit status 2 (238.172248ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-425599 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-425599 logs -n 25: (1.685388049s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-793540 sudo cat                             | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo                                 | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo                                 | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo                                 | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo find                            | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-793540 sudo crio                            | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-793540                                      | flannel-793540               | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-896665 | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | disable-driver-mounts-896665                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:57 UTC |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-037711             | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-037711                                   | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-339897            | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-339897                                  | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-612312  | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC |                     |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-037711                  | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-037711                                   | no-preload-037711            | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC | 20 Sep 24 19:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-339897                 | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-425599        | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-339897                                  | embed-certs-339897           | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC | 20 Sep 24 19:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-612312       | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612312 | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC | 20 Sep 24 19:09 UTC |
	|         | default-k8s-diff-port-612312                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-425599                              | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-425599             | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-425599                              | old-k8s-version-425599       | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:01:28
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:01:28.948776  303486 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:01:28.948894  303486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:01:28.948900  303486 out.go:358] Setting ErrFile to fd 2...
	I0920 19:01:28.948906  303486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:01:28.949090  303486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 19:01:28.949637  303486 out.go:352] Setting JSON to false
	I0920 19:01:28.950705  303486 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9832,"bootTime":1726849057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 19:01:28.950809  303486 start.go:139] virtualization: kvm guest
	I0920 19:01:28.953226  303486 out.go:177] * [old-k8s-version-425599] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 19:01:28.955013  303486 notify.go:220] Checking for updates...
	I0920 19:01:28.955065  303486 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 19:01:28.956932  303486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:01:28.959076  303486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:01:28.961116  303486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 19:01:28.963396  303486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 19:01:28.965428  303486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:01:28.967688  303486 config.go:182] Loaded profile config "old-k8s-version-425599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 19:01:28.968112  303486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:01:28.968175  303486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:01:28.984002  303486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40325
	I0920 19:01:28.984552  303486 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:01:28.985260  303486 main.go:141] libmachine: Using API Version  1
	I0920 19:01:28.985291  303486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:01:28.985715  303486 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:01:28.985972  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:01:28.988070  303486 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 19:01:28.989565  303486 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:01:28.990007  303486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:01:28.990079  303486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:01:29.006020  303486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34059
	I0920 19:01:29.006492  303486 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:01:29.007046  303486 main.go:141] libmachine: Using API Version  1
	I0920 19:01:29.007078  303486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:01:29.007441  303486 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:01:29.007706  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:01:29.049785  303486 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 19:01:29.051185  303486 start.go:297] selected driver: kvm2
	I0920 19:01:29.051206  303486 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:01:29.051323  303486 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:01:29.052030  303486 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:01:29.052131  303486 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 19:01:29.068826  303486 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 19:01:29.069232  303486 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:01:29.069262  303486 cni.go:84] Creating CNI manager for ""
	I0920 19:01:29.069297  303486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:01:29.069333  303486 start.go:340] cluster config:
	{Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:01:29.069439  303486 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:01:29.071617  303486 out.go:177] * Starting "old-k8s-version-425599" primary control-plane node in "old-k8s-version-425599" cluster
	I0920 19:01:27.086248  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:29.073133  303486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 19:01:29.073174  303486 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 19:01:29.073182  303486 cache.go:56] Caching tarball of preloaded images
	I0920 19:01:29.073269  303486 preload.go:172] Found /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 19:01:29.073285  303486 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 19:01:29.073388  303486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/config.json ...
	I0920 19:01:29.073573  303486 start.go:360] acquireMachinesLock for old-k8s-version-425599: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:01:33.166258  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:36.238261  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:42.318195  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:45.390223  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:51.470272  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:01:54.542277  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:00.622232  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:03.694275  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:09.774241  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:12.846248  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:18.926213  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:21.998195  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:28.078192  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:31.150239  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:37.230160  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:40.302224  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:46.382225  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:49.454205  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:55.534186  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:02:58.606232  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:04.686254  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:07.758234  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:13.838239  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:16.910321  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:22.990234  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:26.062339  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:32.142210  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:35.214256  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:41.294234  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:44.366288  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:50.446215  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:53.518266  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:03:59.598190  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:02.670240  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:08.750179  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:11.822239  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:17.902176  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
	I0920 19:04:20.974235  302538 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.136:22: connect: no route to host
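The long run of "no route to host" errors above is libmachine repeatedly dialing the guest's SSH port (192.168.61.136:22) while the VM's network is still down. Purely as an illustration of that pattern (this is not minikube's actual code; the 3-second dial timeout, 6-second retry interval and 5-minute deadline are assumptions), a bounded dial-and-retry loop in Go looks roughly like this:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP keeps dialing addr until it answers or the deadline passes.
// Illustrative only: the 3s dial timeout and 6s retry interval are
// assumptions, not values taken from the log above.
func waitForTCP(addr string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil // port is reachable again
		}
		if time.Now().After(stop) {
			return fmt.Errorf("giving up on %s: %v", addr, err)
		}
		fmt.Printf("dial %s failed (%v), retrying...\n", addr, err)
		time.Sleep(6 * time.Second)
	}
}

func main() {
	if err := waitForTCP("192.168.61.136:22", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}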
	I0920 19:04:23.977804  302869 start.go:364] duration metric: took 4m19.519175605s to acquireMachinesLock for "embed-certs-339897"
	I0920 19:04:23.977868  302869 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:04:23.977876  302869 fix.go:54] fixHost starting: 
	I0920 19:04:23.978233  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:04:23.978277  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:04:23.993804  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39423
	I0920 19:04:23.994326  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:04:23.994906  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:04:23.994925  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:04:23.995219  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:04:23.995413  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:23.995575  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:04:23.997417  302869 fix.go:112] recreateIfNeeded on embed-certs-339897: state=Stopped err=<nil>
	I0920 19:04:23.997439  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	W0920 19:04:23.997636  302869 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:04:24.001021  302869 out.go:177] * Restarting existing kvm2 VM for "embed-certs-339897" ...
	I0920 19:04:24.002636  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Start
	I0920 19:04:24.002842  302869 main.go:141] libmachine: (embed-certs-339897) Ensuring networks are active...
	I0920 19:04:24.003916  302869 main.go:141] libmachine: (embed-certs-339897) Ensuring network default is active
	I0920 19:04:24.004282  302869 main.go:141] libmachine: (embed-certs-339897) Ensuring network mk-embed-certs-339897 is active
	I0920 19:04:24.004647  302869 main.go:141] libmachine: (embed-certs-339897) Getting domain xml...
	I0920 19:04:24.005446  302869 main.go:141] libmachine: (embed-certs-339897) Creating domain...
	I0920 19:04:23.975096  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:04:23.975155  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:04:23.975457  302538 buildroot.go:166] provisioning hostname "no-preload-037711"
	I0920 19:04:23.975485  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:04:23.975712  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:04:23.977607  302538 machine.go:96] duration metric: took 4m37.412034117s to provisionDockerMachine
	I0920 19:04:23.977703  302538 fix.go:56] duration metric: took 4m37.437032108s for fixHost
	I0920 19:04:23.977718  302538 start.go:83] releasing machines lock for "no-preload-037711", held for 4m37.437081737s
	W0920 19:04:23.977745  302538 start.go:714] error starting host: provision: host is not running
	W0920 19:04:23.977850  302538 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0920 19:04:23.977859  302538 start.go:729] Will try again in 5 seconds ...
	I0920 19:04:25.258221  302869 main.go:141] libmachine: (embed-certs-339897) Waiting to get IP...
	I0920 19:04:25.259119  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:25.259493  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:25.259584  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:25.259481  304091 retry.go:31] will retry after 212.462393ms: waiting for machine to come up
	I0920 19:04:25.474057  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:25.474524  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:25.474564  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:25.474441  304091 retry.go:31] will retry after 306.01691ms: waiting for machine to come up
	I0920 19:04:25.782264  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:25.782729  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:25.782753  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:25.782706  304091 retry.go:31] will retry after 416.637796ms: waiting for machine to come up
	I0920 19:04:26.201336  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:26.201704  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:26.201738  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:26.201645  304091 retry.go:31] will retry after 583.373452ms: waiting for machine to come up
	I0920 19:04:26.786448  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:26.786854  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:26.786876  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:26.786807  304091 retry.go:31] will retry after 760.706965ms: waiting for machine to come up
	I0920 19:04:27.548786  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:27.549126  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:27.549149  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:27.549088  304091 retry.go:31] will retry after 615.829194ms: waiting for machine to come up
	I0920 19:04:28.167061  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:28.167601  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:28.167647  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:28.167419  304091 retry.go:31] will retry after 786.700064ms: waiting for machine to come up
	I0920 19:04:28.955294  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:28.955658  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:28.955685  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:28.955611  304091 retry.go:31] will retry after 1.309567829s: waiting for machine to come up
	I0920 19:04:28.979506  302538 start.go:360] acquireMachinesLock for no-preload-037711: {Name:mk82bb1df7851a4709a11d77a00ab21beb32e067 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:04:30.267104  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:30.267645  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:30.267676  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:30.267583  304091 retry.go:31] will retry after 1.153396834s: waiting for machine to come up
	I0920 19:04:31.423030  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:31.423604  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:31.423629  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:31.423542  304091 retry.go:31] will retry after 1.858288741s: waiting for machine to come up
	I0920 19:04:33.284886  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:33.285381  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:33.285419  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:33.285334  304091 retry.go:31] will retry after 2.343802005s: waiting for machine to come up
	I0920 19:04:35.630962  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:35.631408  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:35.631439  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:35.631359  304091 retry.go:31] will retry after 2.42254126s: waiting for machine to come up
	I0920 19:04:38.055128  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:38.055796  302869 main.go:141] libmachine: (embed-certs-339897) DBG | unable to find current IP address of domain embed-certs-339897 in network mk-embed-certs-339897
	I0920 19:04:38.055854  302869 main.go:141] libmachine: (embed-certs-339897) DBG | I0920 19:04:38.055732  304091 retry.go:31] will retry after 3.877296828s: waiting for machine to come up
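The retry.go:31 lines show the wait-for-IP loop asking libvirt for the domain's DHCP lease and sleeping a little longer after each miss. A minimal sketch of that retry-with-growing-backoff pattern follows; the attempt count, base delay, jitter and the stand-in lookupIP function are all assumptions, not values from the log:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out,
// sleeping a little longer (with jitter) after each failure.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return errors.New("machine never reported an IP address")
}

func main() {
	// lookupIP is a stand-in for asking libvirt for the domain's DHCP lease.
	lookupIP := func() error { return errors.New("no lease yet") }
	_ = retryWithBackoff(5, 200*time.Millisecond, lookupIP)
}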
	I0920 19:04:43.362725  303063 start.go:364] duration metric: took 4m20.211671699s to acquireMachinesLock for "default-k8s-diff-port-612312"
	I0920 19:04:43.362794  303063 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:04:43.362810  303063 fix.go:54] fixHost starting: 
	I0920 19:04:43.363257  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:04:43.363315  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:04:43.380877  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34751
	I0920 19:04:43.381399  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:04:43.381894  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:04:43.381933  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:04:43.382364  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:04:43.382596  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:04:43.382746  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:04:43.384351  303063 fix.go:112] recreateIfNeeded on default-k8s-diff-port-612312: state=Stopped err=<nil>
	I0920 19:04:43.384379  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	W0920 19:04:43.384540  303063 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:04:43.386969  303063 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-612312" ...
	I0920 19:04:41.936215  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.936789  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has current primary IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.936811  302869 main.go:141] libmachine: (embed-certs-339897) Found IP for machine: 192.168.72.72
	I0920 19:04:41.936823  302869 main.go:141] libmachine: (embed-certs-339897) Reserving static IP address...
	I0920 19:04:41.937386  302869 main.go:141] libmachine: (embed-certs-339897) Reserved static IP address: 192.168.72.72
	I0920 19:04:41.937412  302869 main.go:141] libmachine: (embed-certs-339897) Waiting for SSH to be available...
	I0920 19:04:41.937435  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "embed-certs-339897", mac: "52:54:00:dc:b1:41", ip: "192.168.72.72"} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:41.937466  302869 main.go:141] libmachine: (embed-certs-339897) DBG | skip adding static IP to network mk-embed-certs-339897 - found existing host DHCP lease matching {name: "embed-certs-339897", mac: "52:54:00:dc:b1:41", ip: "192.168.72.72"}
	I0920 19:04:41.937481  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Getting to WaitForSSH function...
	I0920 19:04:41.939673  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.940065  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:41.940089  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:41.940196  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Using SSH client type: external
	I0920 19:04:41.940223  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa (-rw-------)
	I0920 19:04:41.940261  302869 main.go:141] libmachine: (embed-certs-339897) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:04:41.940274  302869 main.go:141] libmachine: (embed-certs-339897) DBG | About to run SSH command:
	I0920 19:04:41.940285  302869 main.go:141] libmachine: (embed-certs-339897) DBG | exit 0
	I0920 19:04:42.065967  302869 main.go:141] libmachine: (embed-certs-339897) DBG | SSH cmd err, output: <nil>: 
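The WaitForSSH step above shells out to the system ssh binary and runs `exit 0` against the guest until the command exits cleanly. A rough equivalent using os/exec is sketched below; the option list mirrors the one printed in the log, while the key-path placeholder, attempt count and sleep interval are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns nil once `ssh ... exit 0` against the guest succeeds.
// Illustrative sketch, not minikube's actual implementation.
func sshReady(ip, keyPath string) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath, // hypothetical path for illustration
		"docker@" + ip,
		"exit 0",
	}
	for i := 0; i < 30; i++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("ssh to %s never became available", ip)
}

func main() {
	fmt.Println(sshReady("192.168.72.72", "/path/to/id_rsa"))
}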
	I0920 19:04:42.066357  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetConfigRaw
	I0920 19:04:42.067004  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:42.069586  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.069937  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.069968  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.070208  302869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/config.json ...
	I0920 19:04:42.070452  302869 machine.go:93] provisionDockerMachine start ...
	I0920 19:04:42.070478  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:42.070687  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.072878  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.073340  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.073375  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.073501  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.073701  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.073899  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.074080  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.074254  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.074504  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.074523  302869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:04:42.182250  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:04:42.182287  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetMachineName
	I0920 19:04:42.182543  302869 buildroot.go:166] provisioning hostname "embed-certs-339897"
	I0920 19:04:42.182570  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetMachineName
	I0920 19:04:42.182818  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.185497  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.185850  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.185886  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.186069  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.186274  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.186421  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.186568  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.186770  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.186986  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.187006  302869 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-339897 && echo "embed-certs-339897" | sudo tee /etc/hostname
	I0920 19:04:42.307656  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-339897
	
	I0920 19:04:42.307700  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.310572  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.310943  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.310970  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.311184  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.311382  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.311534  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.311663  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.311810  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.311984  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.312003  302869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-339897' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-339897/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-339897' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:04:42.426403  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:04:42.426440  302869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:04:42.426493  302869 buildroot.go:174] setting up certificates
	I0920 19:04:42.426502  302869 provision.go:84] configureAuth start
	I0920 19:04:42.426513  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetMachineName
	I0920 19:04:42.426822  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:42.429708  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.430134  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.430170  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.430328  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.432799  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.433222  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.433251  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.433383  302869 provision.go:143] copyHostCerts
	I0920 19:04:42.433466  302869 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:04:42.433487  302869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:04:42.433549  302869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:04:42.433644  302869 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:04:42.433652  302869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:04:42.433678  302869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:04:42.433735  302869 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:04:42.433742  302869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:04:42.433762  302869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:04:42.433811  302869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.embed-certs-339897 san=[127.0.0.1 192.168.72.72 embed-certs-339897 localhost minikube]
	I0920 19:04:42.745528  302869 provision.go:177] copyRemoteCerts
	I0920 19:04:42.745599  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:04:42.745633  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.748247  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.748587  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.748619  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.748811  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.749014  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.749201  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.749334  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:42.831927  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:04:42.855674  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:04:42.879114  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 19:04:42.902982  302869 provision.go:87] duration metric: took 476.462339ms to configureAuth
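configureAuth regenerates the Docker-machine server certificate with SANs for 127.0.0.1, 192.168.72.72, embed-certs-339897, localhost and minikube, then scp's the CA and server cert/key into /etc/docker on the guest. For readers unfamiliar with that step, here is a self-contained Go sketch of issuing such a certificate; it creates a throwaway CA in-process instead of loading ca.pem/ca-key.pem, ignores errors for brevity, and is an illustration rather than the tool's actual code path:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA generated in-process (errors ignored for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN list seen in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-339897"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-339897", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.72")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}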
	I0920 19:04:42.903019  302869 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:04:42.903236  302869 config.go:182] Loaded profile config "embed-certs-339897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:04:42.903321  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:42.906208  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.906580  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:42.906613  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:42.906810  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:42.907006  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.907136  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:42.907263  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:42.907427  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:42.907601  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:42.907616  302869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:04:43.127800  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:04:43.127847  302869 machine.go:96] duration metric: took 1.057372659s to provisionDockerMachine
	I0920 19:04:43.127864  302869 start.go:293] postStartSetup for "embed-certs-339897" (driver="kvm2")
	I0920 19:04:43.127890  302869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:04:43.127917  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.128263  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:04:43.128298  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.131648  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.132138  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.132173  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.132340  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.132560  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.132739  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.132896  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:43.216646  302869 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:04:43.220513  302869 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:04:43.220548  302869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:04:43.220629  302869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:04:43.220734  302869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:04:43.220862  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:04:43.230506  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:04:43.252894  302869 start.go:296] duration metric: took 125.003067ms for postStartSetup
	I0920 19:04:43.252943  302869 fix.go:56] duration metric: took 19.275066559s for fixHost
	I0920 19:04:43.252971  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.255999  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.256378  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.256406  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.256634  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.256858  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.257047  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.257214  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.257382  302869 main.go:141] libmachine: Using SSH client type: native
	I0920 19:04:43.257546  302869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.72 22 <nil> <nil>}
	I0920 19:04:43.257556  302869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:04:43.362516  302869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859083.339291891
	
	I0920 19:04:43.362545  302869 fix.go:216] guest clock: 1726859083.339291891
	I0920 19:04:43.362553  302869 fix.go:229] Guest: 2024-09-20 19:04:43.339291891 +0000 UTC Remote: 2024-09-20 19:04:43.25294824 +0000 UTC m=+278.942139838 (delta=86.343651ms)
	I0920 19:04:43.362585  302869 fix.go:200] guest clock delta is within tolerance: 86.343651ms
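The clock check above reads the guest's time over SSH with `date +%s.%N`, compares it to the host's, and only resyncs when the difference exceeds a tolerance. A tiny sketch of that comparison; the 2-second tolerance here is an assumption for illustration, not the value minikube uses:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as seen in the log.
	guestRaw := "1726859083.339291891"

	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	host := time.Now()
	delta := host.Sub(guest)

	// Tolerance is an assumption for this sketch.
	const tolerance = 2 * time.Second
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v too large, would resync\n", delta)
	}
}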
	I0920 19:04:43.362591  302869 start.go:83] releasing machines lock for "embed-certs-339897", held for 19.38474105s
	I0920 19:04:43.362620  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.362970  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:43.365988  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.366359  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.366380  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.366610  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.367130  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.367326  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:04:43.367423  302869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:04:43.367469  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.367602  302869 ssh_runner.go:195] Run: cat /version.json
	I0920 19:04:43.367628  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:04:43.370233  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.370594  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.370624  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.370649  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.370804  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.370998  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.371169  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:43.371191  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:43.371249  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.371406  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:04:43.371470  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:43.371566  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:04:43.371721  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:04:43.371885  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:04:43.490023  302869 ssh_runner.go:195] Run: systemctl --version
	I0920 19:04:43.496615  302869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:04:43.643493  302869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:04:43.649492  302869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:04:43.649560  302869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:04:43.665423  302869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:04:43.665460  302869 start.go:495] detecting cgroup driver to use...
	I0920 19:04:43.665530  302869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:04:43.681288  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:04:43.695161  302869 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:04:43.695218  302869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:04:43.708772  302869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:04:43.722803  302869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:04:43.834054  302869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:04:43.966014  302869 docker.go:233] disabling docker service ...
	I0920 19:04:43.966102  302869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:04:43.982324  302869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:04:43.995351  302869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:04:44.135635  302869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:04:44.262661  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:04:44.277377  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:04:44.299889  302869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:04:44.299965  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.312434  302869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:04:44.312534  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.323052  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.333504  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.343704  302869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:04:44.354386  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.364308  302869 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.383581  302869 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:04:44.395013  302869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:04:44.405227  302869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:04:44.405279  302869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:04:44.418685  302869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:04:44.431323  302869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:04:44.558582  302869 ssh_runner.go:195] Run: sudo systemctl restart crio
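The preceding block rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl), enables IPv4 forwarding, loads br_netfilter as a fallback for the missing bridge-nf-call-iptables sysctl, and restarts cri-o. Below is a small Go sketch of just the first two in-place edits, equivalent to the sed commands in the log; the path and replacement values come from the log, error handling is trimmed, and this is not the tool's actual implementation:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// Rewrites pause_image and cgroup_manager in a cri-o drop-in config,
// mirroring the sed edits shown in the log above.
func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Println(err)
	}
}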
	I0920 19:04:44.644003  302869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:04:44.644091  302869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:04:44.649434  302869 start.go:563] Will wait 60s for crictl version
	I0920 19:04:44.649498  302869 ssh_runner.go:195] Run: which crictl
	I0920 19:04:44.653334  302869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:04:44.695896  302869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:04:44.696004  302869 ssh_runner.go:195] Run: crio --version
	I0920 19:04:44.726148  302869 ssh_runner.go:195] Run: crio --version
	I0920 19:04:44.757340  302869 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 19:04:43.388378  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Start
	I0920 19:04:43.388603  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Ensuring networks are active...
	I0920 19:04:43.389387  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Ensuring network default is active
	I0920 19:04:43.389863  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Ensuring network mk-default-k8s-diff-port-612312 is active
	I0920 19:04:43.390364  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Getting domain xml...
	I0920 19:04:43.391121  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Creating domain...
	I0920 19:04:44.718004  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting to get IP...
	I0920 19:04:44.718885  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.719317  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.719413  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:44.719288  304227 retry.go:31] will retry after 197.63251ms: waiting for machine to come up
	I0920 19:04:44.919026  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.919516  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:44.919547  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:44.919475  304227 retry.go:31] will retry after 305.409091ms: waiting for machine to come up
	I0920 19:04:45.227550  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.228191  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.228224  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:45.228147  304227 retry.go:31] will retry after 311.72219ms: waiting for machine to come up
	I0920 19:04:45.541945  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.542374  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:45.542403  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:45.542344  304227 retry.go:31] will retry after 547.369471ms: waiting for machine to come up
	I0920 19:04:46.091199  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.091731  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.091765  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:46.091693  304227 retry.go:31] will retry after 519.190971ms: waiting for machine to come up
	I0920 19:04:46.612175  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.612641  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:46.612672  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:46.612591  304227 retry.go:31] will retry after 715.908704ms: waiting for machine to come up
	I0920 19:04:47.330911  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:47.331350  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:47.331379  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:47.331294  304227 retry.go:31] will retry after 898.358136ms: waiting for machine to come up
	I0920 19:04:44.759090  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetIP
	I0920 19:04:44.762331  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:44.762696  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:04:44.762728  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:04:44.762954  302869 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 19:04:44.767209  302869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:04:44.781327  302869 kubeadm.go:883] updating cluster {Name:embed-certs-339897 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-339897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:04:44.781465  302869 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:04:44.781512  302869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:04:44.817356  302869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 19:04:44.817422  302869 ssh_runner.go:195] Run: which lz4
	I0920 19:04:44.821534  302869 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:04:44.826169  302869 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:04:44.826205  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 19:04:46.160290  302869 crio.go:462] duration metric: took 1.338795677s to copy over tarball
	I0920 19:04:46.160379  302869 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 19:04:48.265535  302869 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.105118482s)
	I0920 19:04:48.265580  302869 crio.go:469] duration metric: took 2.105250135s to extract the tarball
	I0920 19:04:48.265588  302869 ssh_runner.go:146] rm: /preloaded.tar.lz4
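For illustration, here is a rough Go sketch of the copy-then-extract step logged above, timing the `tar -I lz4` extraction the way the `duration metric` lines do. It is not minikube's own code, and running it verbatim assumes `/preloaded.tar.lz4` exists locally, which normally only holds on the guest.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same extraction command the log runs on the guest over SSH; invoked locally here
	// purely for illustration.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
}
```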
	I0920 19:04:48.302529  302869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:04:48.346391  302869 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:04:48.346419  302869 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:04:48.346427  302869 kubeadm.go:934] updating node { 192.168.72.72 8443 v1.31.1 crio true true} ...
	I0920 19:04:48.346566  302869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-339897 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-339897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:04:48.346668  302869 ssh_runner.go:195] Run: crio config
	I0920 19:04:48.396798  302869 cni.go:84] Creating CNI manager for ""
	I0920 19:04:48.396824  302869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:04:48.396834  302869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:04:48.396866  302869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.72 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-339897 NodeName:embed-certs-339897 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:04:48.397043  302869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-339897"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:04:48.397121  302869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:04:48.407031  302869 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:04:48.407118  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:04:48.416554  302869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0920 19:04:48.432540  302869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:04:48.448042  302869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0920 19:04:48.465193  302869 ssh_runner.go:195] Run: grep 192.168.72.72	control-plane.minikube.internal$ /etc/hosts
	I0920 19:04:48.469083  302869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:04:48.481123  302869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:04:48.609883  302869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:04:48.627512  302869 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897 for IP: 192.168.72.72
	I0920 19:04:48.627545  302869 certs.go:194] generating shared ca certs ...
	I0920 19:04:48.627571  302869 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:04:48.627784  302869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:04:48.627851  302869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:04:48.627866  302869 certs.go:256] generating profile certs ...
	I0920 19:04:48.628032  302869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/client.key
	I0920 19:04:48.628143  302869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/apiserver.key.308547ed
	I0920 19:04:48.628206  302869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/proxy-client.key
	I0920 19:04:48.628375  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:04:48.628421  302869 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:04:48.628435  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:04:48.628470  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:04:48.628509  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:04:48.628542  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:04:48.628616  302869 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:04:48.629569  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:04:48.656203  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:04:48.708322  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:04:48.737686  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:04:48.772198  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 19:04:48.812086  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 19:04:48.836038  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:04:48.859972  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/embed-certs-339897/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 19:04:48.883881  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:04:48.908399  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:04:48.930787  302869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:04:48.954052  302869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:04:48.970257  302869 ssh_runner.go:195] Run: openssl version
	I0920 19:04:48.976072  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:04:48.986449  302869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:04:48.990765  302869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:04:48.990833  302869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:04:48.996437  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:04:49.007111  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:04:49.017548  302869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:04:49.022044  302869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:04:49.022108  302869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:04:49.027752  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:04:49.038538  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:04:49.049445  302869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:04:49.054018  302869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:04:49.054100  302869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:04:49.059842  302869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:04:49.070748  302869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:04:49.075195  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:04:49.081100  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:04:49.086844  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:04:49.092790  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:04:49.098664  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:04:49.104562  302869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
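The string of `openssl x509 ... -checkend 86400` runs above verifies that none of the control-plane certificates expire within the next 24 hours. A hedged Go equivalent using `crypto/x509` is sketched below; the certificate path is only an example taken from the log.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside the given window,
// the same question `openssl x509 -checkend 86400` answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```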
	I0920 19:04:49.110818  302869 kubeadm.go:392] StartCluster: {Name:embed-certs-339897 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-339897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:04:49.110952  302869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:04:49.111003  302869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:04:49.157700  302869 cri.go:89] found id: ""
	I0920 19:04:49.157774  302869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:04:49.168314  302869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:04:49.168339  302869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:04:49.168385  302869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:04:49.178632  302869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:04:49.179681  302869 kubeconfig.go:125] found "embed-certs-339897" server: "https://192.168.72.72:8443"
	I0920 19:04:49.181624  302869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:04:49.192084  302869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.72
	I0920 19:04:49.192159  302869 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:04:49.192188  302869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:04:49.192265  302869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:04:49.229141  302869 cri.go:89] found id: ""
	I0920 19:04:49.229232  302869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:04:49.247628  302869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:04:49.258190  302869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:04:49.258211  302869 kubeadm.go:157] found existing configuration files:
	
	I0920 19:04:49.258270  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:04:49.267769  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:04:49.267837  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:04:49.277473  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:04:49.286639  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:04:49.286712  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:04:49.296295  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:04:49.305705  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:04:49.305787  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:04:49.315191  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:04:49.324206  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:04:49.324288  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:04:49.334065  302869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:04:49.344823  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:48.231405  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:48.231846  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:48.231872  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:48.231795  304227 retry.go:31] will retry after 1.105264539s: waiting for machine to come up
	I0920 19:04:49.338940  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:49.339413  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:49.339437  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:49.339366  304227 retry.go:31] will retry after 1.638536651s: waiting for machine to come up
	I0920 19:04:50.980320  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:50.980774  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:50.980805  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:50.980714  304227 retry.go:31] will retry after 2.064766522s: waiting for machine to come up
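The interleaved `retry.go` lines show the kvm2 driver polling libvirt for the new machine's DHCP lease with growing delays. The following is a hypothetical sketch of that retry-with-backoff loop; `lookupLeaseIP` is a placeholder, not the real libvirt query, and the MAC address is copied from the log only as sample input.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP is a placeholder for the real libvirt DHCP-lease query.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls for the guest's IP with a growing, jittered delay between attempts,
// in the spirit of the retry.go waits in the log.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		delay := time.Duration(attempt)*500*time.Millisecond +
			time.Duration(rand.Intn(500))*time.Millisecond
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	return "", fmt.Errorf("timed out waiting for a DHCP lease for %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:fa:2b:63", 3*time.Minute); err == nil {
		fmt.Println("machine is up at", ip)
	}
}
```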
	I0920 19:04:49.450454  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.412643  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.629144  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.694547  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:50.756897  302869 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:04:50.757008  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:51.258120  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:51.758025  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:52.258040  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:52.757302  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:04:52.774867  302869 api_server.go:72] duration metric: took 2.017964832s to wait for apiserver process to appear ...
	I0920 19:04:52.774906  302869 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:04:52.774954  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:55.383214  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:04:55.383255  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:04:55.383272  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:55.406625  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:04:55.406660  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:04:55.775825  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:55.785126  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:04:55.785157  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:04:56.275864  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:56.284002  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:04:56.284032  302869 api_server.go:103] status: https://192.168.72.72:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:04:56.775547  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:04:56.779999  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 200:
	ok
	I0920 19:04:56.786034  302869 api_server.go:141] control plane version: v1.31.1
	I0920 19:04:56.786066  302869 api_server.go:131] duration metric: took 4.011153019s to wait for apiserver health ...
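The healthz sequence above is a plain polling loop: the apiserver first answers 403 (anonymous access to /healthz is forbidden until RBAC bootstrap completes), then 500 while `poststarthook/rbac/bootstrap-roles` is still failing, and finally 200. Below is a minimal sketch of such a loop; it skips TLS verification purely to keep the example self-contained, which is an assumption for illustration rather than how minikube itself talks to the apiserver.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the endpoint until it answers 200 or the timeout passes,
// printing the interim 403/500 bodies the way the log does.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.72:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```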
	I0920 19:04:56.786076  302869 cni.go:84] Creating CNI manager for ""
	I0920 19:04:56.786082  302869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:04:56.788195  302869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:04:53.047487  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:53.048005  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:53.048027  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:53.047958  304227 retry.go:31] will retry after 2.829648578s: waiting for machine to come up
	I0920 19:04:55.879069  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:55.879538  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:55.879562  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:55.879488  304227 retry.go:31] will retry after 3.029828813s: waiting for machine to come up
	I0920 19:04:56.789703  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:04:56.799605  302869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:04:56.816974  302869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:04:56.828470  302869 system_pods.go:59] 8 kube-system pods found
	I0920 19:04:56.828582  302869 system_pods.go:61] "coredns-7c65d6cfc9-xnfsk" [5e34a8b9-d748-484a-92ab-0d288ab5f35e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:04:56.828610  302869 system_pods.go:61] "etcd-embed-certs-339897" [1d0e8303-0ab9-418c-ba2d-f0ba33abad36] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 19:04:56.828637  302869 system_pods.go:61] "kube-apiserver-embed-certs-339897" [35569778-54b1-456d-8822-5a53a5e336fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 19:04:56.828655  302869 system_pods.go:61] "kube-controller-manager-embed-certs-339897" [6b9db655-59a1-4975-b3c7-fcc29a912850] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:04:56.828677  302869 system_pods.go:61] "kube-proxy-xs4nd" [a32f4c96-ae6e-4402-89c5-0226a4412d17] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 19:04:56.828694  302869 system_pods.go:61] "kube-scheduler-embed-certs-339897" [81dd07df-2ba9-4f8e-bb16-263bd6496a0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 19:04:56.828716  302869 system_pods.go:61] "metrics-server-6867b74b74-qqhcw" [b720a331-05ef-4528-bd25-0c1e7ef66b16] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:04:56.828729  302869 system_pods.go:61] "storage-provisioner" [08674813-f61d-49e9-a714-5f38b95f058e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 19:04:56.828738  302869 system_pods.go:74] duration metric: took 11.732519ms to wait for pod list to return data ...
	I0920 19:04:56.828748  302869 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:04:56.835747  302869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:04:56.835786  302869 node_conditions.go:123] node cpu capacity is 2
	I0920 19:04:56.835799  302869 node_conditions.go:105] duration metric: took 7.044914ms to run NodePressure ...
	I0920 19:04:56.835822  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:04:57.221422  302869 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 19:04:57.225575  302869 kubeadm.go:739] kubelet initialised
	I0920 19:04:57.225601  302869 kubeadm.go:740] duration metric: took 4.150722ms waiting for restarted kubelet to initialise ...
	I0920 19:04:57.225610  302869 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:04:57.230469  302869 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace to be "Ready" ...
	I0920 19:04:59.237961  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:58.911412  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:04:58.911990  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | unable to find current IP address of domain default-k8s-diff-port-612312 in network mk-default-k8s-diff-port-612312
	I0920 19:04:58.912020  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | I0920 19:04:58.911956  304227 retry.go:31] will retry after 3.428044067s: waiting for machine to come up
	I0920 19:05:02.343216  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.343633  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Found IP for machine: 192.168.50.230
	I0920 19:05:02.343668  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has current primary IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.343679  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Reserving static IP address...
	I0920 19:05:02.344038  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Reserved static IP address: 192.168.50.230
	I0920 19:05:02.344084  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-612312", mac: "52:54:00:fa:2b:63", ip: "192.168.50.230"} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.344097  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Waiting for SSH to be available...
	I0920 19:05:02.344123  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | skip adding static IP to network mk-default-k8s-diff-port-612312 - found existing host DHCP lease matching {name: "default-k8s-diff-port-612312", mac: "52:54:00:fa:2b:63", ip: "192.168.50.230"}
	I0920 19:05:02.344136  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Getting to WaitForSSH function...
	I0920 19:05:02.346591  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.346932  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.346957  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.347110  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Using SSH client type: external
	I0920 19:05:02.347157  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa (-rw-------)
	I0920 19:05:02.347194  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:05:02.347214  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | About to run SSH command:
	I0920 19:05:02.347227  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | exit 0
	I0920 19:05:02.474040  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | SSH cmd err, output: <nil>: 
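`WaitForSSH` above probes the guest by running `exit 0` through an external `ssh` client with non-interactive options. Here is a small Go sketch that assembles the same kind of command with `os/exec`; the key path and address are copied from the log, but the helper itself is illustrative rather than the driver's actual code.

```go
package main

import (
	"fmt"
	"os/exec"
)

// sshCommand builds an external ssh invocation with the non-interactive options
// listed in the log for WaitForSSH.
func sshCommand(keyPath, user, addr, remoteCmd string) *exec.Cmd {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, addr),
		remoteCmd,
	}
	return exec.Command("ssh", args...)
}

func main() {
	cmd := sshCommand("/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa",
		"docker", "192.168.50.230", "exit 0")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("ssh probe failed: %v\n%s", err, out)
	} else {
		fmt.Println("SSH is available")
	}
}
```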
	I0920 19:05:02.474475  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetConfigRaw
	I0920 19:05:02.475160  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:02.477963  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.478338  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.478361  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.478680  303063 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/config.json ...
	I0920 19:05:02.478923  303063 machine.go:93] provisionDockerMachine start ...
	I0920 19:05:02.478949  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:02.479166  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.481380  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.481759  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.481797  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.481961  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.482149  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.482307  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.482458  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.482619  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:02.482883  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:02.482900  303063 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:05:02.586360  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:05:02.586395  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetMachineName
	I0920 19:05:02.586694  303063 buildroot.go:166] provisioning hostname "default-k8s-diff-port-612312"
	I0920 19:05:02.586720  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetMachineName
	I0920 19:05:02.586951  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.589692  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.590053  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.590080  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.590230  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.590420  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.590563  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.590722  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.590936  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:02.591112  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:02.591126  303063 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-612312 && echo "default-k8s-diff-port-612312" | sudo tee /etc/hostname
	I0920 19:05:02.707768  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-612312
	
	I0920 19:05:02.707799  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.710647  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.711035  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.711064  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.711234  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.711448  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.711622  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.711791  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.711938  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:02.712098  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:02.712116  303063 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-612312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-612312/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-612312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:05:02.828234  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:05:02.828274  303063 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:05:02.828314  303063 buildroot.go:174] setting up certificates
	I0920 19:05:02.828327  303063 provision.go:84] configureAuth start
	I0920 19:05:02.828340  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetMachineName
	I0920 19:05:02.828700  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:02.831997  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.832469  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.832503  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.832704  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.835280  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.835577  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.835608  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.835699  303063 provision.go:143] copyHostCerts
	I0920 19:05:02.835766  303063 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:05:02.835787  303063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:05:02.835848  303063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:05:02.835947  303063 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:05:02.835955  303063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:05:02.835975  303063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:05:02.836026  303063 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:05:02.836033  303063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:05:02.836055  303063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:05:02.836103  303063 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-612312 san=[127.0.0.1 192.168.50.230 default-k8s-diff-port-612312 localhost minikube]
	I0920 19:05:02.983437  303063 provision.go:177] copyRemoteCerts
	I0920 19:05:02.983510  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:05:02.983541  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:02.986435  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.986791  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:02.986835  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:02.987110  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:02.987289  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:02.987438  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:02.987579  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.674961  303486 start.go:364] duration metric: took 3m34.601349843s to acquireMachinesLock for "old-k8s-version-425599"
	I0920 19:05:03.675039  303486 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:05:03.675048  303486 fix.go:54] fixHost starting: 
	I0920 19:05:03.675480  303486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:03.675541  303486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:03.694201  303486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34167
	I0920 19:05:03.694642  303486 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:03.695198  303486 main.go:141] libmachine: Using API Version  1
	I0920 19:05:03.695221  303486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:03.695609  303486 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:03.695793  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:03.695935  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetState
	I0920 19:05:03.697838  303486 fix.go:112] recreateIfNeeded on old-k8s-version-425599: state=Stopped err=<nil>
	I0920 19:05:03.697885  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	W0920 19:05:03.698080  303486 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:05:03.700333  303486 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-425599" ...
	I0920 19:05:03.701947  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .Start
	I0920 19:05:03.702184  303486 main.go:141] libmachine: (old-k8s-version-425599) Ensuring networks are active...
	I0920 19:05:03.703106  303486 main.go:141] libmachine: (old-k8s-version-425599) Ensuring network default is active
	I0920 19:05:03.703645  303486 main.go:141] libmachine: (old-k8s-version-425599) Ensuring network mk-old-k8s-version-425599 is active
	I0920 19:05:03.704152  303486 main.go:141] libmachine: (old-k8s-version-425599) Getting domain xml...
	I0920 19:05:03.704942  303486 main.go:141] libmachine: (old-k8s-version-425599) Creating domain...
	I0920 19:05:01.738488  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:03.238934  302869 pod_ready.go:93] pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:03.238968  302869 pod_ready.go:82] duration metric: took 6.008471722s for pod "coredns-7c65d6cfc9-xnfsk" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:03.238978  302869 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:03.746041  302869 pod_ready.go:93] pod "etcd-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:03.746069  302869 pod_ready.go:82] duration metric: took 507.084418ms for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:03.746078  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:03.072306  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0920 19:05:03.096078  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:05:03.122027  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:05:03.150314  303063 provision.go:87] duration metric: took 321.970593ms to configureAuth
	I0920 19:05:03.150345  303063 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:05:03.150557  303063 config.go:182] Loaded profile config "default-k8s-diff-port-612312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:05:03.150650  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.153988  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.154472  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.154524  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.154631  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.154840  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.155194  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.155397  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.155741  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:03.155990  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:03.156011  303063 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:05:03.417981  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:05:03.418020  303063 machine.go:96] duration metric: took 939.078754ms to provisionDockerMachine
	I0920 19:05:03.418038  303063 start.go:293] postStartSetup for "default-k8s-diff-port-612312" (driver="kvm2")
	I0920 19:05:03.418052  303063 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:05:03.418083  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.418456  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:05:03.418496  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.421689  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.422245  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.422282  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.422539  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.422747  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.422991  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.423144  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.509122  303063 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:05:03.515233  303063 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:05:03.515263  303063 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:05:03.515343  303063 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:05:03.515441  303063 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:05:03.515561  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:05:03.529346  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:03.559267  303063 start.go:296] duration metric: took 141.209592ms for postStartSetup
	I0920 19:05:03.559320  303063 fix.go:56] duration metric: took 20.196510123s for fixHost
	I0920 19:05:03.559348  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.563599  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.564320  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.564354  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.564605  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.564917  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.565120  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.565379  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.565588  303063 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:03.565813  303063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.230 22 <nil> <nil>}
	I0920 19:05:03.565827  303063 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:05:03.674803  303063 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859103.651785276
	
	I0920 19:05:03.674833  303063 fix.go:216] guest clock: 1726859103.651785276
	I0920 19:05:03.674840  303063 fix.go:229] Guest: 2024-09-20 19:05:03.651785276 +0000 UTC Remote: 2024-09-20 19:05:03.559326363 +0000 UTC m=+280.560675514 (delta=92.458913ms)
	I0920 19:05:03.674862  303063 fix.go:200] guest clock delta is within tolerance: 92.458913ms
	I0920 19:05:03.674867  303063 start.go:83] releasing machines lock for "default-k8s-diff-port-612312", held for 20.312097182s
	I0920 19:05:03.674897  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.675183  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:03.677975  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.678374  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.678406  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.678552  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.679080  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.679255  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:03.679380  303063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:05:03.679429  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.679442  303063 ssh_runner.go:195] Run: cat /version.json
	I0920 19:05:03.679472  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:03.682443  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.682733  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.682876  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.682902  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.683014  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.683081  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:03.683104  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:03.683222  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.683326  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:03.683440  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.683512  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:03.683634  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:03.683721  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.683753  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:03.766786  303063 ssh_runner.go:195] Run: systemctl --version
	I0920 19:05:03.806684  303063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:05:03.950032  303063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:05:03.957153  303063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:05:03.957230  303063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:05:03.976784  303063 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:05:03.976814  303063 start.go:495] detecting cgroup driver to use...
	I0920 19:05:03.976902  303063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:05:03.994391  303063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:05:04.009961  303063 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:05:04.010021  303063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:05:04.023827  303063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:05:04.038585  303063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:05:04.157489  303063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:05:04.320396  303063 docker.go:233] disabling docker service ...
	I0920 19:05:04.320477  303063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:05:04.334865  303063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:05:04.350776  303063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:05:04.469438  303063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:05:04.596055  303063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:05:04.610548  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:05:04.629128  303063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:05:04.629192  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.640211  303063 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:05:04.640289  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.650877  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.661863  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.672695  303063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:05:04.684141  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.696358  303063 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.714936  303063 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:04.726155  303063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:05:04.737400  303063 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:05:04.737460  303063 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:05:04.752752  303063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:05:04.767664  303063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:04.892509  303063 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:05:04.992361  303063 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:05:04.992465  303063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:05:04.997119  303063 start.go:563] Will wait 60s for crictl version
	I0920 19:05:04.997215  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:05:05.001132  303063 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:05:05.050835  303063 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:05:05.050955  303063 ssh_runner.go:195] Run: crio --version
	I0920 19:05:05.079870  303063 ssh_runner.go:195] Run: crio --version
	I0920 19:05:05.112325  303063 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 19:05:05.113600  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetIP
	I0920 19:05:05.116591  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:05.117037  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:05.117075  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:05.117334  303063 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 19:05:05.122086  303063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:05.135489  303063 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-612312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-612312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.230 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:05:05.135682  303063 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:05:05.135776  303063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:05.174026  303063 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 19:05:05.174090  303063 ssh_runner.go:195] Run: which lz4
	I0920 19:05:05.179003  303063 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:05:05.184119  303063 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:05:05.184168  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 19:05:06.479331  303063 crio.go:462] duration metric: took 1.300388015s to copy over tarball
	I0920 19:05:06.479434  303063 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 19:05:05.040094  303486 main.go:141] libmachine: (old-k8s-version-425599) Waiting to get IP...
	I0920 19:05:05.041198  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:05.041615  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:05.041711  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:05.041616  304380 retry.go:31] will retry after 264.073086ms: waiting for machine to come up
	I0920 19:05:05.307229  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:05.307761  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:05.307784  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:05.307713  304380 retry.go:31] will retry after 317.541552ms: waiting for machine to come up
	I0920 19:05:05.627262  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:05.627903  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:05.627929  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:05.627797  304380 retry.go:31] will retry after 432.236037ms: waiting for machine to come up
	I0920 19:05:06.062368  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:06.062842  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:06.062873  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:06.062804  304380 retry.go:31] will retry after 525.683807ms: waiting for machine to come up
	I0920 19:05:06.590915  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:06.591405  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:06.591434  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:06.591355  304380 retry.go:31] will retry after 542.00244ms: waiting for machine to come up
	I0920 19:05:07.135388  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:07.135944  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:07.135998  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:07.135908  304380 retry.go:31] will retry after 886.798885ms: waiting for machine to come up
	I0920 19:05:08.024147  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:08.024684  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:08.024713  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:08.024596  304380 retry.go:31] will retry after 826.869965ms: waiting for machine to come up
	I0920 19:05:08.853176  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:08.853793  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:08.853828  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:08.853736  304380 retry.go:31] will retry after 1.007422775s: waiting for machine to come up
	I0920 19:05:05.756992  302869 pod_ready.go:103] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:08.255312  302869 pod_ready.go:103] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:08.656490  303063 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.1770136s)
	I0920 19:05:08.656529  303063 crio.go:469] duration metric: took 2.177156837s to extract the tarball
	I0920 19:05:08.656539  303063 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 19:05:08.693153  303063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:08.733444  303063 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:05:08.733473  303063 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:05:08.733484  303063 kubeadm.go:934] updating node { 192.168.50.230 8444 v1.31.1 crio true true} ...
	I0920 19:05:08.733624  303063 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-612312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-612312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:05:08.733710  303063 ssh_runner.go:195] Run: crio config
	I0920 19:05:08.777872  303063 cni.go:84] Creating CNI manager for ""
	I0920 19:05:08.777913  303063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:05:08.777927  303063 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:05:08.777957  303063 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.230 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-612312 NodeName:default-k8s-diff-port-612312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:05:08.778143  303063 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.230
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-612312"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:05:08.778220  303063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:05:08.788133  303063 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:05:08.788208  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:05:08.797461  303063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0920 19:05:08.814111  303063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:05:08.832188  303063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0920 19:05:08.849801  303063 ssh_runner.go:195] Run: grep 192.168.50.230	control-plane.minikube.internal$ /etc/hosts
	I0920 19:05:08.853809  303063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:08.865685  303063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:08.985881  303063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:05:09.002387  303063 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312 for IP: 192.168.50.230
	I0920 19:05:09.002417  303063 certs.go:194] generating shared ca certs ...
	I0920 19:05:09.002441  303063 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:09.002656  303063 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:05:09.002727  303063 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:05:09.002741  303063 certs.go:256] generating profile certs ...
	I0920 19:05:09.002859  303063 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/client.key
	I0920 19:05:09.002940  303063 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/apiserver.key.637d18af
	I0920 19:05:09.002990  303063 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/proxy-client.key
	I0920 19:05:09.003207  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:05:09.003248  303063 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:05:09.003256  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:05:09.003278  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:05:09.003306  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:05:09.003328  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:05:09.003365  303063 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:09.004030  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:05:09.037203  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:05:09.068858  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:05:09.095082  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:05:09.122167  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 19:05:09.147953  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 19:05:09.174251  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:05:09.202438  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/default-k8s-diff-port-612312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:05:09.231354  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:05:09.256365  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:05:09.282589  303063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:05:09.308610  303063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:05:09.328798  303063 ssh_runner.go:195] Run: openssl version
	I0920 19:05:09.334685  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:05:09.345947  303063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:09.350772  303063 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:09.350838  303063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:09.356595  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:05:09.367559  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:05:09.380638  303063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:05:09.385362  303063 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:05:09.385429  303063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:05:09.391299  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:05:09.402065  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:05:09.412841  303063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:05:09.417074  303063 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:05:09.417138  303063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:05:09.422761  303063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:05:09.433780  303063 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:05:09.438734  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:05:09.444888  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:05:09.450715  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:05:09.456993  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:05:09.462716  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:05:09.468847  303063 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 19:05:09.474680  303063 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-612312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-612312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.230 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:05:09.474780  303063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:05:09.474844  303063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:09.513886  303063 cri.go:89] found id: ""
	I0920 19:05:09.514006  303063 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:05:09.524385  303063 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:05:09.524417  303063 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:05:09.524479  303063 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:05:09.534288  303063 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:05:09.535251  303063 kubeconfig.go:125] found "default-k8s-diff-port-612312" server: "https://192.168.50.230:8444"
	I0920 19:05:09.537293  303063 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:05:09.547753  303063 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.230
	I0920 19:05:09.547796  303063 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:05:09.547812  303063 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:05:09.547890  303063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:09.590656  303063 cri.go:89] found id: ""
	I0920 19:05:09.590743  303063 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:05:09.607426  303063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:05:09.617258  303063 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:05:09.617280  303063 kubeadm.go:157] found existing configuration files:
	
	I0920 19:05:09.617344  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 19:05:09.626725  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:05:09.626813  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:05:09.636421  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 19:05:09.645711  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:05:09.645780  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:05:09.655351  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 19:05:09.664771  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:05:09.664833  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:05:09.674556  303063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 19:05:09.683677  303063 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:05:09.683821  303063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:05:09.695159  303063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:05:09.704995  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:09.821398  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:10.642045  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:10.870266  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:10.935191  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:11.015669  303063 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:05:11.015787  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:11.516670  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:12.016486  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:12.516070  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:13.016012  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:13.031718  303063 api_server.go:72] duration metric: took 2.016048489s to wait for apiserver process to appear ...
	I0920 19:05:13.031752  303063 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:05:13.031781  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:13.032414  303063 api_server.go:269] stopped: https://192.168.50.230:8444/healthz: Get "https://192.168.50.230:8444/healthz": dial tcp 192.168.50.230:8444: connect: connection refused
	I0920 19:05:09.863227  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:09.863693  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:09.863721  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:09.863640  304380 retry.go:31] will retry after 1.556199895s: waiting for machine to come up
	I0920 19:05:11.422510  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:11.423244  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:11.423271  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:11.423179  304380 retry.go:31] will retry after 1.670177778s: waiting for machine to come up
	I0920 19:05:13.095982  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:13.096600  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:13.096626  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:13.096545  304380 retry.go:31] will retry after 2.71780554s: waiting for machine to come up
	I0920 19:05:10.256325  302869 pod_ready.go:93] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.256352  302869 pod_ready.go:82] duration metric: took 6.510267221s for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.256361  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.263229  302869 pod_ready.go:93] pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.263254  302869 pod_ready.go:82] duration metric: took 6.886052ms for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.263264  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xs4nd" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.270014  302869 pod_ready.go:93] pod "kube-proxy-xs4nd" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.270040  302869 pod_ready.go:82] duration metric: took 6.769102ms for pod "kube-proxy-xs4nd" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.270049  302869 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.277232  302869 pod_ready.go:93] pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:10.277262  302869 pod_ready.go:82] duration metric: took 7.203732ms for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:10.277275  302869 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:12.284083  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:14.284983  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:13.532830  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:15.579530  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:05:15.579567  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:05:15.579584  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:15.596526  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:05:15.596570  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:05:16.032011  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:16.039310  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:05:16.039346  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:05:16.531881  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:16.536703  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:05:16.536736  303063 api_server.go:103] status: https://192.168.50.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:05:17.032322  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:05:17.036979  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 200:
	ok
	I0920 19:05:17.043667  303063 api_server.go:141] control plane version: v1.31.1
	I0920 19:05:17.043701  303063 api_server.go:131] duration metric: took 4.011936277s to wait for apiserver health ...
	I0920 19:05:17.043710  303063 cni.go:84] Creating CNI manager for ""
	I0920 19:05:17.043716  303063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:05:17.045376  303063 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:05:17.046579  303063 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:05:17.056771  303063 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:05:17.076571  303063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:05:17.085546  303063 system_pods.go:59] 8 kube-system pods found
	I0920 19:05:17.085584  303063 system_pods.go:61] "coredns-7c65d6cfc9-427x2" [48b87f9f-4697-4d76-aed1-3d54720172c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:05:17.085591  303063 system_pods.go:61] "etcd-default-k8s-diff-port-612312" [8537d9ce-87da-425d-a90a-eb4f30f9d23f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 19:05:17.085597  303063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-612312" [d5705298-0b1f-4f95-bf5d-c795e09dd30e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 19:05:17.085608  303063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-612312" [99b04f73-91d6-42d4-8def-09b65ca142f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:05:17.085615  303063 system_pods.go:61] "kube-proxy-zp8l5" [9fe30e51-ef3f-4448-916a-8ad75832b207] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 19:05:17.085624  303063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-612312" [4b9dfd39-5b60-4a90-9edf-0af7e99f0ac6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 19:05:17.085631  303063 system_pods.go:61] "metrics-server-6867b74b74-2tnqc" [35ce9a11-e606-41da-84bf-b3c5e9a18245] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:05:17.085638  303063 system_pods.go:61] "storage-provisioner" [6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 19:05:17.085646  303063 system_pods.go:74] duration metric: took 9.051189ms to wait for pod list to return data ...
	I0920 19:05:17.085657  303063 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:05:17.089161  303063 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:05:17.089190  303063 node_conditions.go:123] node cpu capacity is 2
	I0920 19:05:17.089201  303063 node_conditions.go:105] duration metric: took 3.534622ms to run NodePressure ...
	I0920 19:05:17.089218  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:17.442957  303063 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 19:05:17.447222  303063 kubeadm.go:739] kubelet initialised
	I0920 19:05:17.447247  303063 kubeadm.go:740] duration metric: took 4.255349ms waiting for restarted kubelet to initialise ...
	I0920 19:05:17.447255  303063 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:05:17.451839  303063 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.457216  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.457240  303063 pod_ready.go:82] duration metric: took 5.361636ms for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.457250  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.457256  303063 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.462245  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.462273  303063 pod_ready.go:82] duration metric: took 5.009342ms for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.462313  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.462326  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.468060  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.468087  303063 pod_ready.go:82] duration metric: took 5.75409ms for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.468099  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.468105  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.479703  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.479727  303063 pod_ready.go:82] duration metric: took 11.614638ms for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.479739  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.479750  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:17.879555  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-proxy-zp8l5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.879582  303063 pod_ready.go:82] duration metric: took 399.824208ms for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:17.879592  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-proxy-zp8l5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:17.879599  303063 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:18.281551  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.281585  303063 pod_ready.go:82] duration metric: took 401.976884ms for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:18.281601  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.281611  303063 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:18.680674  303063 pod_ready.go:98] node "default-k8s-diff-port-612312" hosting pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.680711  303063 pod_ready.go:82] duration metric: took 399.091849ms for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	E0920 19:05:18.680723  303063 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612312" hosting pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:18.680730  303063 pod_ready.go:39] duration metric: took 1.233465539s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:05:18.680747  303063 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:05:18.692948  303063 ops.go:34] apiserver oom_adj: -16
	I0920 19:05:18.692970  303063 kubeadm.go:597] duration metric: took 9.168545987s to restartPrimaryControlPlane
	I0920 19:05:18.692981  303063 kubeadm.go:394] duration metric: took 9.218309896s to StartCluster
	I0920 19:05:18.692999  303063 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:18.693078  303063 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:05:18.694921  303063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:18.695293  303063 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.230 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:05:18.696157  303063 config.go:182] Loaded profile config "default-k8s-diff-port-612312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:05:18.696187  303063 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:05:18.696357  303063 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-612312"
	I0920 19:05:18.696377  303063 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-612312"
	W0920 19:05:18.696387  303063 addons.go:243] addon storage-provisioner should already be in state true
	I0920 19:05:18.696419  303063 host.go:66] Checking if "default-k8s-diff-port-612312" exists ...
	I0920 19:05:18.696449  303063 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-612312"
	I0920 19:05:18.696495  303063 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-612312"
	I0920 19:05:18.696506  303063 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-612312"
	I0920 19:05:18.696588  303063 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-612312"
	W0920 19:05:18.696610  303063 addons.go:243] addon metrics-server should already be in state true
	I0920 19:05:18.696709  303063 host.go:66] Checking if "default-k8s-diff-port-612312" exists ...
	I0920 19:05:18.697239  303063 out.go:177] * Verifying Kubernetes components...
	I0920 19:05:18.697334  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.697386  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.697409  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.697409  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.697442  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.697531  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.698927  303063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:18.713346  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44359
	I0920 19:05:18.713346  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35127
	I0920 19:05:18.713967  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.714000  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.714472  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.714491  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.714572  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.714588  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.714961  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.714965  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.715163  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.715842  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.715893  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.717732  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I0920 19:05:18.718289  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.718553  303063 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-612312"
	W0920 19:05:18.718575  303063 addons.go:243] addon default-storageclass should already be in state true
	I0920 19:05:18.718604  303063 host.go:66] Checking if "default-k8s-diff-port-612312" exists ...
	I0920 19:05:18.718827  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.718852  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.718926  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.718956  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.719243  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.719782  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.719826  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.733219  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42087
	I0920 19:05:18.733789  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.734403  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.734422  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.734463  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41091
	I0920 19:05:18.734905  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.734993  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.735207  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.735363  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.735394  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.735703  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.736264  303063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:18.736321  303063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:18.737489  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:18.739977  303063 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 19:05:18.740477  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39547
	I0920 19:05:18.741217  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.741752  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:05:18.741770  303063 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:05:18.741791  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:18.741854  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.741875  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.742351  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.742547  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.744800  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:18.746006  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.746416  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:18.746442  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.746695  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:18.746961  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:18.746974  303063 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:15.815519  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:15.816035  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:15.816065  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:15.815974  304380 retry.go:31] will retry after 2.62788631s: waiting for machine to come up
	I0920 19:05:18.446768  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:18.447219  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | unable to find current IP address of domain old-k8s-version-425599 in network mk-old-k8s-version-425599
	I0920 19:05:18.447240  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | I0920 19:05:18.447166  304380 retry.go:31] will retry after 4.025841071s: waiting for machine to come up
	I0920 19:05:16.784503  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:18.785829  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:18.747159  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:18.747332  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:18.748881  303063 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:05:18.748901  303063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:05:18.748932  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:18.752335  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.752787  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:18.752812  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.753180  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:18.753340  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:18.753491  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:18.753628  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:18.755106  303063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I0920 19:05:18.755543  303063 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:18.756159  303063 main.go:141] libmachine: Using API Version  1
	I0920 19:05:18.756182  303063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:18.756521  303063 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:18.756710  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetState
	I0920 19:05:18.758400  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .DriverName
	I0920 19:05:18.758674  303063 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:05:18.758690  303063 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:05:18.758707  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHHostname
	I0920 19:05:18.762208  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.762748  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:2b:63", ip: ""} in network mk-default-k8s-diff-port-612312: {Iface:virbr2 ExpiryTime:2024-09-20 20:04:54 +0000 UTC Type:0 Mac:52:54:00:fa:2b:63 Iaid: IPaddr:192.168.50.230 Prefix:24 Hostname:default-k8s-diff-port-612312 Clientid:01:52:54:00:fa:2b:63}
	I0920 19:05:18.762776  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | domain default-k8s-diff-port-612312 has defined IP address 192.168.50.230 and MAC address 52:54:00:fa:2b:63 in network mk-default-k8s-diff-port-612312
	I0920 19:05:18.762950  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHPort
	I0920 19:05:18.763235  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHKeyPath
	I0920 19:05:18.763518  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .GetSSHUsername
	I0920 19:05:18.763678  303063 sshutil.go:53] new ssh client: &{IP:192.168.50.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/default-k8s-diff-port-612312/id_rsa Username:docker}
	I0920 19:05:18.900876  303063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:05:18.919923  303063 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-612312" to be "Ready" ...
	I0920 19:05:18.993779  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:05:18.993814  303063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 19:05:19.001703  303063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:05:19.019424  303063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:05:19.054174  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:05:19.054202  303063 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:05:19.123651  303063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:05:19.123682  303063 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:05:19.186745  303063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:05:19.369866  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:19.369898  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:19.370210  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:19.370229  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:19.370246  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:19.370270  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:19.370552  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:19.370593  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:19.370625  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Closing plugin on server side
	I0920 19:05:19.380105  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:19.380140  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:19.380456  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:19.380472  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.145346  303063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.12587258s)
	I0920 19:05:20.145412  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.145427  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.145769  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Closing plugin on server side
	I0920 19:05:20.145834  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.145846  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.145866  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.145877  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.146126  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.146144  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.152067  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.152084  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.152361  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.152379  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.152388  303063 main.go:141] libmachine: Making call to close driver server
	I0920 19:05:20.152395  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) Calling .Close
	I0920 19:05:20.152625  303063 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:05:20.152662  303063 main.go:141] libmachine: (default-k8s-diff-port-612312) DBG | Closing plugin on server side
	I0920 19:05:20.152711  303063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:05:20.152729  303063 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-612312"
	I0920 19:05:20.154940  303063 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0920 19:05:20.156326  303063 addons.go:510] duration metric: took 1.460148296s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0920 19:05:20.923687  303063 node_ready.go:53] node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:22.924271  303063 node_ready.go:53] node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:23.791151  302538 start.go:364] duration metric: took 54.811585482s to acquireMachinesLock for "no-preload-037711"
	I0920 19:05:23.791208  302538 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:05:23.791219  302538 fix.go:54] fixHost starting: 
	I0920 19:05:23.791657  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:05:23.791696  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:05:23.809350  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32987
	I0920 19:05:23.809873  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:05:23.810520  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:05:23.810555  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:05:23.810893  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:05:23.811118  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:23.811286  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:05:23.812885  302538 fix.go:112] recreateIfNeeded on no-preload-037711: state=Stopped err=<nil>
	I0920 19:05:23.812914  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	W0920 19:05:23.813135  302538 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:05:23.815287  302538 out.go:177] * Restarting existing kvm2 VM for "no-preload-037711" ...
	I0920 19:05:22.477850  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.478419  303486 main.go:141] libmachine: (old-k8s-version-425599) Found IP for machine: 192.168.39.53
	I0920 19:05:22.478454  303486 main.go:141] libmachine: (old-k8s-version-425599) Reserving static IP address...
	I0920 19:05:22.478473  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has current primary IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.478983  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "old-k8s-version-425599", mac: "52:54:00:d2:a5:70", ip: "192.168.39.53"} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.479021  303486 main.go:141] libmachine: (old-k8s-version-425599) Reserved static IP address: 192.168.39.53
	I0920 19:05:22.479040  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | skip adding static IP to network mk-old-k8s-version-425599 - found existing host DHCP lease matching {name: "old-k8s-version-425599", mac: "52:54:00:d2:a5:70", ip: "192.168.39.53"}
	I0920 19:05:22.479055  303486 main.go:141] libmachine: (old-k8s-version-425599) Waiting for SSH to be available...
	I0920 19:05:22.479067  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | Getting to WaitForSSH function...
	I0920 19:05:22.481118  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.481359  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.481382  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.481556  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | Using SSH client type: external
	I0920 19:05:22.481570  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa (-rw-------)
	I0920 19:05:22.481600  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:05:22.481612  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | About to run SSH command:
	I0920 19:05:22.481627  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | exit 0
	I0920 19:05:22.606383  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | SSH cmd err, output: <nil>: 
	I0920 19:05:22.606783  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetConfigRaw
	I0920 19:05:22.607408  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:22.610155  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.610474  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.610506  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.610784  303486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/config.json ...
	I0920 19:05:22.611075  303486 machine.go:93] provisionDockerMachine start ...
	I0920 19:05:22.611103  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:22.611332  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.613838  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.614250  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.614283  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.614395  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:22.614609  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.614776  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.614950  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:22.615136  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:22.615331  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:22.615344  303486 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:05:22.718330  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:05:22.718363  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 19:05:22.718651  303486 buildroot.go:166] provisioning hostname "old-k8s-version-425599"
	I0920 19:05:22.718697  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 19:05:22.718913  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.722027  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.722334  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.722370  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.722559  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:22.722738  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.722909  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.723086  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:22.723261  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:22.723473  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:22.723491  303486 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-425599 && echo "old-k8s-version-425599" | sudo tee /etc/hostname
	I0920 19:05:22.841563  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-425599
	
	I0920 19:05:22.841592  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.844327  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.844716  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.844748  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.844970  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:22.845154  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.845306  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:22.845413  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:22.845570  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:22.845793  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:22.845818  303486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-425599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-425599/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-425599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:05:22.959542  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:05:22.959572  303486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:05:22.959615  303486 buildroot.go:174] setting up certificates
	I0920 19:05:22.959625  303486 provision.go:84] configureAuth start
	I0920 19:05:22.959635  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetMachineName
	I0920 19:05:22.959972  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:22.962506  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.962845  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.962883  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.963020  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:22.965352  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.965734  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:22.965755  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:22.965936  303486 provision.go:143] copyHostCerts
	I0920 19:05:22.965999  303486 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:05:22.966018  303486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:05:22.966073  303486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:05:22.966165  303486 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:05:22.966173  303486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:05:22.966193  303486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:05:22.966250  303486 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:05:22.966257  303486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:05:22.966274  303486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:05:22.966368  303486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-425599 san=[127.0.0.1 192.168.39.53 localhost minikube old-k8s-version-425599]
	I0920 19:05:23.156245  303486 provision.go:177] copyRemoteCerts
	I0920 19:05:23.156322  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:05:23.156356  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.159694  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.160062  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.160105  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.160283  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.160467  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.160633  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.160755  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.244439  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:05:23.271796  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 19:05:23.298124  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:05:23.323466  303486 provision.go:87] duration metric: took 363.82725ms to configureAuth
	I0920 19:05:23.323496  303486 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:05:23.323711  303486 config.go:182] Loaded profile config "old-k8s-version-425599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 19:05:23.323805  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.326985  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.327336  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.327363  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.327573  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.327788  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.328003  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.328161  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.328315  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:23.328492  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:23.328506  303486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:05:23.559721  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:05:23.559755  303486 machine.go:96] duration metric: took 948.663189ms to provisionDockerMachine
	I0920 19:05:23.559770  303486 start.go:293] postStartSetup for "old-k8s-version-425599" (driver="kvm2")
	I0920 19:05:23.559781  303486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:05:23.559812  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.560186  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:05:23.560225  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.563146  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.563462  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.563491  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.563786  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.564018  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.564214  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.564365  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.645013  303486 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:05:23.649198  303486 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:05:23.649230  303486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:05:23.649300  303486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:05:23.649416  303486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:05:23.649544  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:05:23.659351  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:23.683405  303486 start.go:296] duration metric: took 123.617289ms for postStartSetup
	I0920 19:05:23.683466  303486 fix.go:56] duration metric: took 20.008417985s for fixHost
	I0920 19:05:23.683495  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.686540  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.686962  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.686988  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.687209  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.687445  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.687624  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.687803  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.688001  303486 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:23.688188  303486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0920 19:05:23.688206  303486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:05:23.790992  303486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859123.767729644
	
	I0920 19:05:23.791024  303486 fix.go:216] guest clock: 1726859123.767729644
	I0920 19:05:23.791035  303486 fix.go:229] Guest: 2024-09-20 19:05:23.767729644 +0000 UTC Remote: 2024-09-20 19:05:23.683472425 +0000 UTC m=+234.770765310 (delta=84.257219ms)
	I0920 19:05:23.791061  303486 fix.go:200] guest clock delta is within tolerance: 84.257219ms
	I0920 19:05:23.791068  303486 start.go:83] releasing machines lock for "old-k8s-version-425599", held for 20.116056408s
	I0920 19:05:23.791101  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.791432  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:23.794540  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.795015  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.795048  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.795226  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.795779  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.795992  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .DriverName
	I0920 19:05:23.796129  303486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:05:23.796180  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.796241  303486 ssh_runner.go:195] Run: cat /version.json
	I0920 19:05:23.796265  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHHostname
	I0920 19:05:23.799032  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.799374  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.799399  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.799418  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.799540  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.799743  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.799874  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:23.799890  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.799906  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:23.800084  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHPort
	I0920 19:05:23.800077  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.800198  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHKeyPath
	I0920 19:05:23.800365  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetSSHUsername
	I0920 19:05:23.800514  303486 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/old-k8s-version-425599/id_rsa Username:docker}
	I0920 19:05:23.924885  303486 ssh_runner.go:195] Run: systemctl --version
	I0920 19:05:23.932642  303486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:05:21.284671  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:23.284813  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:24.083860  303486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:05:24.090360  303486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:05:24.090444  303486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:05:24.112281  303486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:05:24.112310  303486 start.go:495] detecting cgroup driver to use...
	I0920 19:05:24.112383  303486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:05:24.136600  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:05:24.154552  303486 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:05:24.154631  303486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:05:24.170600  303486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:05:24.186071  303486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:05:24.319752  303486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:05:24.498299  303486 docker.go:233] disabling docker service ...
	I0920 19:05:24.498385  303486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:05:24.515762  303486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:05:24.533482  303486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:05:24.687481  303486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:05:24.820191  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:05:24.835255  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:05:24.856179  303486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 19:05:24.856253  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.868991  303486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:05:24.869080  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.884074  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.898732  303486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:24.911016  303486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:05:24.922757  303486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:05:24.937719  303486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:05:24.937828  303486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:05:24.955496  303486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:05:24.966347  303486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:25.114758  303486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:05:25.226807  303486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:05:25.226984  303486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:05:25.234576  303486 start.go:563] Will wait 60s for crictl version
	I0920 19:05:25.234664  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:25.238739  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:05:25.282242  303486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:05:25.282344  303486 ssh_runner.go:195] Run: crio --version
	I0920 19:05:25.317733  303486 ssh_runner.go:195] Run: crio --version
	I0920 19:05:25.353767  303486 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 19:05:23.816707  302538 main.go:141] libmachine: (no-preload-037711) Calling .Start
	I0920 19:05:23.817003  302538 main.go:141] libmachine: (no-preload-037711) Ensuring networks are active...
	I0920 19:05:23.817953  302538 main.go:141] libmachine: (no-preload-037711) Ensuring network default is active
	I0920 19:05:23.818345  302538 main.go:141] libmachine: (no-preload-037711) Ensuring network mk-no-preload-037711 is active
	I0920 19:05:23.818824  302538 main.go:141] libmachine: (no-preload-037711) Getting domain xml...
	I0920 19:05:23.819705  302538 main.go:141] libmachine: (no-preload-037711) Creating domain...
	I0920 19:05:25.216298  302538 main.go:141] libmachine: (no-preload-037711) Waiting to get IP...
	I0920 19:05:25.217452  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:25.218073  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:25.218138  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:25.218047  304582 retry.go:31] will retry after 256.299732ms: waiting for machine to come up
	I0920 19:05:25.475745  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:25.476451  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:25.476485  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:25.476388  304582 retry.go:31] will retry after 298.732749ms: waiting for machine to come up
	I0920 19:05:25.777093  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:25.777731  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:25.777755  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:25.777701  304582 retry.go:31] will retry after 360.011383ms: waiting for machine to come up
	I0920 19:05:26.139480  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:26.140100  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:26.140132  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:26.140049  304582 retry.go:31] will retry after 593.756705ms: waiting for machine to come up
	I0920 19:05:24.924455  303063 node_ready.go:53] node "default-k8s-diff-port-612312" has status "Ready":"False"
	I0920 19:05:26.425132  303063 node_ready.go:49] node "default-k8s-diff-port-612312" has status "Ready":"True"
	I0920 19:05:26.425165  303063 node_ready.go:38] duration metric: took 7.505210484s for node "default-k8s-diff-port-612312" to be "Ready" ...
	I0920 19:05:26.425181  303063 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:05:26.433394  303063 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:26.440462  303063 pod_ready.go:93] pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:26.440497  303063 pod_ready.go:82] duration metric: took 7.072952ms for pod "coredns-7c65d6cfc9-427x2" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:26.440513  303063 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:25.354959  303486 main.go:141] libmachine: (old-k8s-version-425599) Calling .GetIP
	I0920 19:05:25.358179  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:25.358467  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:a5:70", ip: ""} in network mk-old-k8s-version-425599: {Iface:virbr1 ExpiryTime:2024-09-20 20:05:14 +0000 UTC Type:0 Mac:52:54:00:d2:a5:70 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:old-k8s-version-425599 Clientid:01:52:54:00:d2:a5:70}
	I0920 19:05:25.358495  303486 main.go:141] libmachine: (old-k8s-version-425599) DBG | domain old-k8s-version-425599 has defined IP address 192.168.39.53 and MAC address 52:54:00:d2:a5:70 in network mk-old-k8s-version-425599
	I0920 19:05:25.358739  303486 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 19:05:25.362714  303486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:25.375880  303486 kubeadm.go:883] updating cluster {Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:05:25.376024  303486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 19:05:25.376074  303486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:25.420224  303486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 19:05:25.420307  303486 ssh_runner.go:195] Run: which lz4
	I0920 19:05:25.424775  303486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:05:25.430102  303486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:05:25.430151  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 19:05:27.014068  303486 crio.go:462] duration metric: took 1.589333502s to copy over tarball
	I0920 19:05:27.014160  303486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 19:05:25.786282  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:27.788058  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:26.735924  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:26.736558  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:26.736582  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:26.736458  304582 retry.go:31] will retry after 712.118443ms: waiting for machine to come up
	I0920 19:05:27.450059  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:27.450696  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:27.450719  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:27.450592  304582 retry.go:31] will retry after 588.649809ms: waiting for machine to come up
	I0920 19:05:28.041216  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:28.041760  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:28.041791  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:28.041691  304582 retry.go:31] will retry after 869.42079ms: waiting for machine to come up
	I0920 19:05:28.912809  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:28.913240  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:28.913265  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:28.913174  304582 retry.go:31] will retry after 1.410011475s: waiting for machine to come up
	I0920 19:05:30.324367  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:30.324952  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:30.324980  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:30.324875  304582 retry.go:31] will retry after 1.398358739s: waiting for machine to come up
	I0920 19:05:28.454512  303063 pod_ready.go:103] pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:31.546557  303063 pod_ready.go:103] pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:32.072690  303063 pod_ready.go:93] pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.072719  303063 pod_ready.go:82] duration metric: took 5.632196538s for pod "etcd-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.072734  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.081029  303063 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.081062  303063 pod_ready.go:82] duration metric: took 8.319382ms for pod "kube-apiserver-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.081076  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.087314  303063 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.087338  303063 pod_ready.go:82] duration metric: took 6.253184ms for pod "kube-controller-manager-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.087351  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.093286  303063 pod_ready.go:93] pod "kube-proxy-zp8l5" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.093313  303063 pod_ready.go:82] duration metric: took 5.953425ms for pod "kube-proxy-zp8l5" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.093326  303063 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.098529  303063 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace has status "Ready":"True"
	I0920 19:05:32.098553  303063 pod_ready.go:82] duration metric: took 5.218413ms for pod "kube-scheduler-default-k8s-diff-port-612312" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:32.098565  303063 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	I0920 19:05:30.096727  303486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.082523066s)
	I0920 19:05:30.096778  303486 crio.go:469] duration metric: took 3.082671461s to extract the tarball
	I0920 19:05:30.096789  303486 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 19:05:30.148059  303486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:30.184547  303486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 19:05:30.184578  303486 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 19:05:30.184672  303486 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:30.184686  303486 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.184711  303486 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.184730  303486 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.184732  303486 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.184686  303486 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 19:05:30.184693  303486 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.184792  303486 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.186558  303486 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:30.186609  303486 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 19:05:30.186607  303486 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.186616  303486 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.186688  303486 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.186698  303486 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.186701  303486 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.186565  303486 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.425283  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 19:05:30.469378  303486 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 19:05:30.469448  303486 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 19:05:30.469514  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.475453  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:05:30.493250  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.505003  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.513203  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.514365  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.521729  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:05:30.533265  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.580710  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.613984  303486 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 19:05:30.614033  303486 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.614085  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.653094  303486 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 19:05:30.653150  303486 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.653205  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.675697  303486 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 19:05:30.675730  303486 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 19:05:30.675752  303486 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.675762  303486 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.675805  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.675820  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.675805  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:05:30.709199  303486 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 19:05:30.709261  303486 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.709310  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.720146  303486 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 19:05:30.720198  303486 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.720233  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.720313  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.720241  303486 ssh_runner.go:195] Run: which crictl
	I0920 19:05:30.720374  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.720247  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.737444  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.737487  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 19:05:30.843272  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.843362  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.843366  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:05:30.860414  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:30.860462  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:30.860430  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:30.954641  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:30.982227  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:05:30.982263  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:05:31.041996  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:05:31.042032  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:05:31.042650  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:05:31.042722  303486 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:05:31.070786  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 19:05:31.120407  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 19:05:31.135751  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 19:05:31.163591  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 19:05:31.164483  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 19:05:31.164587  303486 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 19:05:31.345957  303486 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:31.486337  303486 cache_images.go:92] duration metric: took 1.301737533s to LoadCachedImages
	W0920 19:05:31.486434  303486 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0920 19:05:31.486452  303486 kubeadm.go:934] updating node { 192.168.39.53 8443 v1.20.0 crio true true} ...
	I0920 19:05:31.486576  303486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-425599 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:05:31.486661  303486 ssh_runner.go:195] Run: crio config
	I0920 19:05:31.544181  303486 cni.go:84] Creating CNI manager for ""
	I0920 19:05:31.544215  303486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:05:31.544229  303486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:05:31.544257  303486 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.53 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-425599 NodeName:old-k8s-version-425599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 19:05:31.544465  303486 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-425599"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:05:31.544556  303486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 19:05:31.559445  303486 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:05:31.559542  303486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:05:31.570446  303486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0920 19:05:31.588741  303486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:05:31.606454  303486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0920 19:05:31.624483  303486 ssh_runner.go:195] Run: grep 192.168.39.53	control-plane.minikube.internal$ /etc/hosts
	I0920 19:05:31.628285  303486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:31.641039  303486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:31.771690  303486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:05:31.789746  303486 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599 for IP: 192.168.39.53
	I0920 19:05:31.789775  303486 certs.go:194] generating shared ca certs ...
	I0920 19:05:31.789806  303486 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:31.790074  303486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:05:31.790150  303486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:05:31.790165  303486 certs.go:256] generating profile certs ...
	I0920 19:05:31.798117  303486 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/client.key
	I0920 19:05:31.798270  303486 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.key.e78cb154
	I0920 19:05:31.798333  303486 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.key
	I0920 19:05:31.798499  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:05:31.798543  303486 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:05:31.798557  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:05:31.798608  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:05:31.798659  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:05:31.798692  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:05:31.798748  303486 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:31.799624  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:05:31.843298  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:05:31.877299  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:05:31.909777  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:05:31.947787  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 19:05:31.991175  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 19:05:32.019393  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:05:32.048475  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/old-k8s-version-425599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:05:32.084354  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:05:32.112161  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:05:32.138991  303486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:05:32.167653  303486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:05:32.185485  303486 ssh_runner.go:195] Run: openssl version
	I0920 19:05:32.192030  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:05:32.203761  303486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:32.209550  303486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:32.209650  303486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:05:32.216277  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:05:32.228192  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:05:32.239984  303486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:05:32.244782  303486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:05:32.244848  303486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:05:32.250865  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:05:32.262035  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:05:32.273790  303486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:05:32.279335  303486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:05:32.279414  303486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:05:32.286501  303486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:05:32.298052  303486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:05:32.303064  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:05:32.309973  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:05:32.316704  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:05:32.323166  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:05:32.330126  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:05:32.336554  303486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 19:05:32.343303  303486 kubeadm.go:392] StartCluster: {Name:old-k8s-version-425599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:05:32.343413  303486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:05:32.343473  303486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:32.387562  303486 cri.go:89] found id: ""
	I0920 19:05:32.387653  303486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:05:32.398143  303486 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:05:32.398167  303486 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:05:32.398222  303486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:05:32.407776  303486 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:05:32.409205  303486 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-425599" does not appear in /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:05:32.410267  303486 kubeconfig.go:62] /home/jenkins/minikube-integration/19679-237658/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-425599" cluster setting kubeconfig missing "old-k8s-version-425599" context setting]
	I0920 19:05:32.411776  303486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:05:32.457074  303486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:05:32.468055  303486 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.53
	I0920 19:05:32.468113  303486 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:05:32.468132  303486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:05:32.468211  303486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:05:32.505151  303486 cri.go:89] found id: ""
	I0920 19:05:32.505241  303486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:05:32.521391  303486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:05:32.531705  303486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:05:32.531728  303486 kubeadm.go:157] found existing configuration files:
	
	I0920 19:05:32.531774  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:05:32.541137  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:05:32.541219  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:05:32.550684  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:05:32.560262  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:05:32.560352  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:05:32.569735  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:05:32.579126  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:05:32.579199  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:05:32.589508  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:05:32.600985  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:05:32.601100  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:05:32.611511  303486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:05:32.622346  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:32.755562  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:33.793472  303486 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.037864747s)
	I0920 19:05:33.793513  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:30.283826  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:32.285077  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:31.725721  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:31.726171  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:31.726198  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:31.726127  304582 retry.go:31] will retry after 2.32427136s: waiting for machine to come up
	I0920 19:05:34.052412  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:34.053005  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:34.053043  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:34.052923  304582 retry.go:31] will retry after 2.159036217s: waiting for machine to come up
	I0920 19:05:36.215059  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:36.215561  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:36.215585  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:36.215501  304582 retry.go:31] will retry after 3.424610182s: waiting for machine to come up
	I0920 19:05:34.105780  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:36.106491  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:34.021260  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:34.142176  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:05:34.235507  303486 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:05:34.235618  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:34.736586  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:35.236065  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:35.735783  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:36.236406  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:36.736243  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:37.235994  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:37.736168  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:38.236559  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:38.736139  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:34.784743  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:37.282598  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:39.284890  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:39.642163  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:39.642600  302538 main.go:141] libmachine: (no-preload-037711) DBG | unable to find current IP address of domain no-preload-037711 in network mk-no-preload-037711
	I0920 19:05:39.642642  302538 main.go:141] libmachine: (no-preload-037711) DBG | I0920 19:05:39.642541  304582 retry.go:31] will retry after 3.073679854s: waiting for machine to come up
	I0920 19:05:38.116192  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:40.605958  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:39.236010  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:39.735723  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:40.236003  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:40.735741  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:41.235689  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:41.736411  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:42.236028  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:42.735814  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:43.236391  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:43.736174  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:41.783707  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:43.784197  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:42.719195  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.719748  302538 main.go:141] libmachine: (no-preload-037711) Found IP for machine: 192.168.61.136
	I0920 19:05:42.719775  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has current primary IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.719780  302538 main.go:141] libmachine: (no-preload-037711) Reserving static IP address...
	I0920 19:05:42.720201  302538 main.go:141] libmachine: (no-preload-037711) Reserved static IP address: 192.168.61.136
	I0920 19:05:42.720220  302538 main.go:141] libmachine: (no-preload-037711) Waiting for SSH to be available...
	I0920 19:05:42.720239  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "no-preload-037711", mac: "52:54:00:b0:ac:14", ip: "192.168.61.136"} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.720268  302538 main.go:141] libmachine: (no-preload-037711) DBG | skip adding static IP to network mk-no-preload-037711 - found existing host DHCP lease matching {name: "no-preload-037711", mac: "52:54:00:b0:ac:14", ip: "192.168.61.136"}
	I0920 19:05:42.720280  302538 main.go:141] libmachine: (no-preload-037711) DBG | Getting to WaitForSSH function...
	I0920 19:05:42.722402  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.722661  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.722686  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.722864  302538 main.go:141] libmachine: (no-preload-037711) DBG | Using SSH client type: external
	I0920 19:05:42.722877  302538 main.go:141] libmachine: (no-preload-037711) DBG | Using SSH private key: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa (-rw-------)
	I0920 19:05:42.722939  302538 main.go:141] libmachine: (no-preload-037711) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:05:42.722962  302538 main.go:141] libmachine: (no-preload-037711) DBG | About to run SSH command:
	I0920 19:05:42.722979  302538 main.go:141] libmachine: (no-preload-037711) DBG | exit 0
	I0920 19:05:42.850057  302538 main.go:141] libmachine: (no-preload-037711) DBG | SSH cmd err, output: <nil>: 
	I0920 19:05:42.850451  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetConfigRaw
	I0920 19:05:42.851176  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:42.853807  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.854268  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.854290  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.854558  302538 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/config.json ...
	I0920 19:05:42.854764  302538 machine.go:93] provisionDockerMachine start ...
	I0920 19:05:42.854782  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:42.854999  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:42.857347  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.857683  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.857712  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.857892  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:42.858073  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.858242  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.858385  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:42.858569  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:42.858755  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:42.858766  302538 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:05:42.962098  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:05:42.962137  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:05:42.962455  302538 buildroot.go:166] provisioning hostname "no-preload-037711"
	I0920 19:05:42.962488  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:05:42.962696  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:42.965410  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.965793  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:42.965822  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:42.965954  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:42.966128  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.966285  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:42.966442  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:42.966650  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:42.966822  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:42.966847  302538 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-037711 && echo "no-preload-037711" | sudo tee /etc/hostname
	I0920 19:05:43.089291  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-037711
	
	I0920 19:05:43.089338  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.092213  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.092658  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.092689  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.092828  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.093031  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.093188  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.093305  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.093478  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:43.093692  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:43.093719  302538 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-037711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-037711/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-037711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:05:43.210625  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:05:43.210660  302538 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19679-237658/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-237658/.minikube}
	I0920 19:05:43.210720  302538 buildroot.go:174] setting up certificates
	I0920 19:05:43.210740  302538 provision.go:84] configureAuth start
	I0920 19:05:43.210758  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetMachineName
	I0920 19:05:43.211093  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:43.213829  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.214346  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.214379  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.214542  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.216979  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.217294  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.217319  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.217461  302538 provision.go:143] copyHostCerts
	I0920 19:05:43.217526  302538 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem, removing ...
	I0920 19:05:43.217546  302538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem
	I0920 19:05:43.217610  302538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/ca.pem (1078 bytes)
	I0920 19:05:43.217708  302538 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem, removing ...
	I0920 19:05:43.217720  302538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem
	I0920 19:05:43.217750  302538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/cert.pem (1123 bytes)
	I0920 19:05:43.217885  302538 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem, removing ...
	I0920 19:05:43.217899  302538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem
	I0920 19:05:43.217947  302538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-237658/.minikube/key.pem (1679 bytes)
	I0920 19:05:43.218008  302538 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem org=jenkins.no-preload-037711 san=[127.0.0.1 192.168.61.136 localhost minikube no-preload-037711]
	I0920 19:05:43.395507  302538 provision.go:177] copyRemoteCerts
	I0920 19:05:43.395576  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:05:43.395607  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.398288  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.398663  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.398694  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.398899  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.399087  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.399205  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.399324  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:43.488543  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 19:05:43.514793  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:05:43.537520  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:05:43.561983  302538 provision.go:87] duration metric: took 351.22541ms to configureAuth
	I0920 19:05:43.562021  302538 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:05:43.562213  302538 config.go:182] Loaded profile config "no-preload-037711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:05:43.562292  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.565776  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.566235  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.566270  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.566486  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.566706  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.566895  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.567043  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.567251  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:43.567439  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:43.567454  302538 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:05:43.797110  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:05:43.797142  302538 machine.go:96] duration metric: took 942.364782ms to provisionDockerMachine
	I0920 19:05:43.797157  302538 start.go:293] postStartSetup for "no-preload-037711" (driver="kvm2")
	I0920 19:05:43.797171  302538 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:05:43.797193  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:43.797516  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:05:43.797546  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.800148  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.800532  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.800559  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.800794  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.800993  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.801158  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.801255  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:43.885788  302538 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:05:43.890070  302538 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:05:43.890108  302538 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/addons for local assets ...
	I0920 19:05:43.890198  302538 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-237658/.minikube/files for local assets ...
	I0920 19:05:43.890293  302538 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem -> 2448492.pem in /etc/ssl/certs
	I0920 19:05:43.890405  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:05:43.899679  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:05:43.924928  302538 start.go:296] duration metric: took 127.752462ms for postStartSetup
	I0920 19:05:43.924973  302538 fix.go:56] duration metric: took 20.133755115s for fixHost
	I0920 19:05:43.924996  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:43.927678  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.928059  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:43.928099  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:43.928277  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:43.928517  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.928685  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:43.928815  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:43.928979  302538 main.go:141] libmachine: Using SSH client type: native
	I0920 19:05:43.929190  302538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0920 19:05:43.929204  302538 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:05:44.042745  302538 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726859144.016675004
	
	I0920 19:05:44.042769  302538 fix.go:216] guest clock: 1726859144.016675004
	I0920 19:05:44.042776  302538 fix.go:229] Guest: 2024-09-20 19:05:44.016675004 +0000 UTC Remote: 2024-09-20 19:05:43.924977449 +0000 UTC m=+357.534412233 (delta=91.697555ms)
	I0920 19:05:44.042804  302538 fix.go:200] guest clock delta is within tolerance: 91.697555ms
	I0920 19:05:44.042819  302538 start.go:83] releasing machines lock for "no-preload-037711", held for 20.251627041s
	I0920 19:05:44.042842  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.043134  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:44.046071  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.046412  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:44.046440  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.046613  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.047113  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.047278  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:05:44.047366  302538 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:05:44.047428  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:44.047520  302538 ssh_runner.go:195] Run: cat /version.json
	I0920 19:05:44.047548  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:05:44.050275  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.050358  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.050849  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:44.050872  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:44.050892  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.050915  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:44.051095  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:44.051259  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:44.051259  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:05:44.051496  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:44.051637  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:05:44.051655  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:44.051789  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:05:44.051953  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:05:44.134420  302538 ssh_runner.go:195] Run: systemctl --version
	I0920 19:05:44.175303  302538 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:05:44.319129  302538 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:05:44.325894  302538 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:05:44.325975  302538 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:05:44.341779  302538 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:05:44.341809  302538 start.go:495] detecting cgroup driver to use...
	I0920 19:05:44.341899  302538 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:05:44.358211  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:05:44.373240  302538 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:05:44.373327  302538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:05:44.387429  302538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:05:44.401684  302538 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:05:44.521292  302538 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:05:44.668050  302538 docker.go:233] disabling docker service ...
	I0920 19:05:44.668124  302538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:05:44.683196  302538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:05:44.696604  302538 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:05:44.843581  302538 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:05:44.959377  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:05:44.973472  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:05:44.991282  302538 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:05:44.991344  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.001696  302538 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:05:45.001776  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.012684  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.023288  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.034330  302538 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:05:45.045773  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.056332  302538 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.074730  302538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:05:45.085656  302538 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:05:45.096371  302538 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:05:45.096447  302538 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:05:45.112094  302538 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:05:45.123050  302538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:05:45.236136  302538 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:05:45.325978  302538 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:05:45.326065  302538 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:05:45.330452  302538 start.go:563] Will wait 60s for crictl version
	I0920 19:05:45.330527  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.334010  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:05:45.373622  302538 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:05:45.373736  302538 ssh_runner.go:195] Run: crio --version
	I0920 19:05:45.401279  302538 ssh_runner.go:195] Run: crio --version
	I0920 19:05:45.430445  302538 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 19:05:45.431717  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetIP
	I0920 19:05:45.434768  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:45.435094  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:05:45.435121  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:05:45.435335  302538 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 19:05:45.439275  302538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:05:45.451300  302538 kubeadm.go:883] updating cluster {Name:no-preload-037711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:no-preload-037711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:05:45.451461  302538 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:05:45.451502  302538 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:05:45.485045  302538 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 19:05:45.485073  302538 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 19:05:45.485130  302538 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:45.485150  302538 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.485168  302538 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.485182  302538 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.485231  302538 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.485171  302538 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.485305  302538 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 19:05:45.485450  302538 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.486694  302538 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.486700  302538 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.486808  302538 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.486808  302538 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 19:05:45.486829  302538 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.486894  302538 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:45.486829  302538 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.487055  302538 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.708911  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0920 19:05:45.773014  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.815176  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.818274  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.818298  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.829644  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.850791  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.862553  302538 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0920 19:05:45.862616  302538 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.862680  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.907516  302538 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0920 19:05:45.907573  302538 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.907629  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.938640  302538 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0920 19:05:45.938715  302538 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.938755  302538 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0920 19:05:45.938799  302538 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.938845  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.938770  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.947658  302538 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0920 19:05:45.947706  302538 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:45.947757  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.965105  302538 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0920 19:05:45.965161  302538 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:45.965166  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:45.965191  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:45.965248  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:45.965282  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:45.965344  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:45.965350  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:46.044513  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:46.044640  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:46.077894  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:46.080113  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:46.080170  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:46.080239  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:46.155137  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:46.155188  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 19:05:46.208431  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 19:05:46.208477  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 19:05:46.208521  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 19:05:46.208565  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 19:05:46.290657  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 19:05:46.290694  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 19:05:46.290794  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 19:05:46.325206  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 19:05:46.325353  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 19:05:46.353181  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0920 19:05:46.353289  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 19:05:46.353307  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 19:05:46.353312  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0920 19:05:46.353331  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0920 19:05:46.353383  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0920 19:05:46.353418  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 19:05:46.353384  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 19:05:46.353512  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 19:05:46.379873  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0920 19:05:46.379934  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0920 19:05:46.379979  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 19:05:46.380024  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0920 19:05:46.379981  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0920 19:05:46.380134  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 19:05:43.105005  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:45.105781  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:47.604822  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:44.235886  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:44.736349  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:45.235783  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:45.736619  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:46.236082  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:46.736609  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:47.236078  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:47.736130  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:48.236218  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:48.735858  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:45.784555  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:47.785125  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:46.622278  302538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:48.339532  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.985991382s)
	I0920 19:05:48.339568  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0920 19:05:48.339594  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 19:05:48.339653  302538 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.959488823s)
	I0920 19:05:48.339685  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0920 19:05:48.339665  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 19:05:48.339742  302538 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.717432253s)
	I0920 19:05:48.339787  302538 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0920 19:05:48.339815  302538 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:48.339842  302538 ssh_runner.go:195] Run: which crictl
	I0920 19:05:48.343725  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:50.823508  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.483779728s)
	I0920 19:05:50.823559  302538 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.479795238s)
	I0920 19:05:50.823593  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0920 19:05:50.823637  302538 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0920 19:05:50.823649  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:50.823692  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0920 19:05:49.607326  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:51.609055  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:49.236645  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:49.736183  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:50.236642  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:50.735713  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:51.235862  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:51.736479  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:52.235726  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:52.735939  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:53.235759  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:53.736290  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:50.284090  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:52.284996  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:54.127303  302538 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.303601736s)
	I0920 19:05:54.127415  302538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:05:54.127327  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.303608969s)
	I0920 19:05:54.127455  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0920 19:05:54.127488  302538 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0920 19:05:54.127530  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0920 19:05:56.202021  302538 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.074563861s)
	I0920 19:05:56.202050  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.074501802s)
	I0920 19:05:56.202076  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0920 19:05:56.202095  302538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 19:05:56.202118  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 19:05:56.202184  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 19:05:56.202202  302538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 19:05:56.207141  302538 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0920 19:05:54.104909  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:56.105373  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:54.235840  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:54.735817  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:55.235812  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:55.736410  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:56.236203  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:56.735713  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:57.235777  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:57.735835  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:58.236448  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:58.736010  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:54.783661  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:56.784770  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:58.785122  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:58.166303  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.964088667s)
	I0920 19:05:58.166340  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0920 19:05:58.166369  302538 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 19:05:58.166424  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 19:05:59.625258  302538 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.458808535s)
	I0920 19:05:59.625294  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0920 19:05:59.625318  302538 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 19:05:59.625361  302538 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0920 19:06:00.572722  302538 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19679-237658/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 19:06:00.572768  302538 cache_images.go:123] Successfully loaded all cached images
	I0920 19:06:00.572774  302538 cache_images.go:92] duration metric: took 15.087689513s to LoadCachedImages
	I0920 19:06:00.572788  302538 kubeadm.go:934] updating node { 192.168.61.136 8443 v1.31.1 crio true true} ...
	I0920 19:06:00.572917  302538 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-037711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-037711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:06:00.572994  302538 ssh_runner.go:195] Run: crio config
	I0920 19:06:00.619832  302538 cni.go:84] Creating CNI manager for ""
	I0920 19:06:00.619861  302538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:06:00.619875  302538 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:06:00.619910  302538 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.136 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-037711 NodeName:no-preload-037711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:06:00.620110  302538 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-037711"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:06:00.620181  302538 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:06:00.630434  302538 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:06:00.630513  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:06:00.639447  302538 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0920 19:06:00.656195  302538 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:06:00.675718  302538 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0920 19:06:00.709191  302538 ssh_runner.go:195] Run: grep 192.168.61.136	control-plane.minikube.internal$ /etc/hosts
	I0920 19:06:00.713271  302538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:06:00.726826  302538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:06:00.850927  302538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:06:00.869014  302538 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711 for IP: 192.168.61.136
	I0920 19:06:00.869044  302538 certs.go:194] generating shared ca certs ...
	I0920 19:06:00.869109  302538 certs.go:226] acquiring lock for ca certs: {Name:mk70b63defa4107ec1dc68841b10b0b5e2cd1033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:06:00.869331  302538 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key
	I0920 19:06:00.869393  302538 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key
	I0920 19:06:00.869405  302538 certs.go:256] generating profile certs ...
	I0920 19:06:00.869507  302538 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/client.key
	I0920 19:06:00.869589  302538 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/apiserver.key.b5da98fb
	I0920 19:06:00.869654  302538 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/proxy-client.key
	I0920 19:06:00.869831  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem (1338 bytes)
	W0920 19:06:00.869877  302538 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849_empty.pem, impossibly tiny 0 bytes
	I0920 19:06:00.869890  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:06:00.869947  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:06:00.869981  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:06:00.870010  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/certs/key.pem (1679 bytes)
	I0920 19:06:00.870068  302538 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem (1708 bytes)
	I0920 19:06:00.870802  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:06:00.922699  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:06:00.953401  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:06:00.996889  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:06:01.024682  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 19:06:01.050412  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 19:06:01.081212  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:06:01.108337  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:06:01.133628  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/ssl/certs/2448492.pem --> /usr/share/ca-certificates/2448492.pem (1708 bytes)
	I0920 19:06:01.158805  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:06:01.186888  302538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-237658/.minikube/certs/244849.pem --> /usr/share/ca-certificates/244849.pem (1338 bytes)
	I0920 19:06:01.211771  302538 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:06:01.229448  302538 ssh_runner.go:195] Run: openssl version
	I0920 19:06:01.235289  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:06:01.246775  302538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:06:01.251410  302538 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:06:01.251472  302538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:06:01.257271  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:06:01.268229  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244849.pem && ln -fs /usr/share/ca-certificates/244849.pem /etc/ssl/certs/244849.pem"
	I0920 19:06:01.280431  302538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244849.pem
	I0920 19:06:01.285643  302538 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:55 /usr/share/ca-certificates/244849.pem
	I0920 19:06:01.285736  302538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244849.pem
	I0920 19:06:01.291772  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244849.pem /etc/ssl/certs/51391683.0"
	I0920 19:06:01.302858  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448492.pem && ln -fs /usr/share/ca-certificates/2448492.pem /etc/ssl/certs/2448492.pem"
	I0920 19:06:01.314034  302538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448492.pem
	I0920 19:06:01.319160  302538 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:55 /usr/share/ca-certificates/2448492.pem
	I0920 19:06:01.319235  302538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448492.pem
	I0920 19:06:01.325450  302538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448492.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:06:01.336803  302538 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:06:01.341439  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:06:01.347592  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:06:01.354109  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:06:01.360513  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:06:01.366749  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:06:01.372898  302538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 19:06:01.379101  302538 kubeadm.go:392] StartCluster: {Name:no-preload-037711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:no-preload-037711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:06:01.379228  302538 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:06:01.379280  302538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:06:01.416896  302538 cri.go:89] found id: ""
	I0920 19:06:01.416972  302538 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:05:58.606203  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:00.606802  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:59.236283  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:05:59.736440  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:00.236142  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:00.735772  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:01.236360  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:01.736388  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:02.236462  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:02.736742  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.236457  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.736705  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:01.284596  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:03.784495  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:01.428611  302538 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:06:01.428636  302538 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:06:01.428685  302538 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:06:01.439392  302538 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:06:01.440512  302538 kubeconfig.go:125] found "no-preload-037711" server: "https://192.168.61.136:8443"
	I0920 19:06:01.442938  302538 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:06:01.452938  302538 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.136
	I0920 19:06:01.452982  302538 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:06:01.452999  302538 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:06:01.453062  302538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:06:01.487878  302538 cri.go:89] found id: ""
	I0920 19:06:01.487967  302538 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:06:01.506032  302538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:06:01.516536  302538 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:06:01.516562  302538 kubeadm.go:157] found existing configuration files:
	
	I0920 19:06:01.516609  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:06:01.526718  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:06:01.526790  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:06:01.536809  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:06:01.546172  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:06:01.546243  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:06:01.556211  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:06:01.565796  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:06:01.565869  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:06:01.577089  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:06:01.587862  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:06:01.587985  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:06:01.598666  302538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:06:01.610018  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:01.740046  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.566817  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.784258  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.848752  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:02.933469  302538 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:06:02.933579  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.434385  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.933975  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:03.962422  302538 api_server.go:72] duration metric: took 1.028951755s to wait for apiserver process to appear ...
	I0920 19:06:03.962453  302538 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:06:03.962485  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:03.963119  302538 api_server.go:269] stopped: https://192.168.61.136:8443/healthz: Get "https://192.168.61.136:8443/healthz": dial tcp 192.168.61.136:8443: connect: connection refused
	I0920 19:06:04.462843  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.443140  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:06:06.443178  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:06:06.443196  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.485554  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:06:06.485597  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:06:06.485614  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.566023  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:06:06.566068  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:06:06.963116  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:06.972764  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:06:06.972804  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:06:07.463432  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:07.470963  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:06:07.471000  302538 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:06:07.962553  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:06:07.967724  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 200:
	ok
	I0920 19:06:07.975215  302538 api_server.go:141] control plane version: v1.31.1
	I0920 19:06:07.975248  302538 api_server.go:131] duration metric: took 4.01278814s to wait for apiserver health ...
	I0920 19:06:07.975258  302538 cni.go:84] Creating CNI manager for ""
	I0920 19:06:07.975267  302538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:06:07.977455  302538 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:06:03.106079  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:05.609475  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:04.236005  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:04.735854  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:05.236716  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:05.736668  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:06.235839  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:06.736412  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:07.236224  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:07.735830  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:08.235800  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:08.736645  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:06.284930  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:08.784854  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:07.979099  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:06:07.991210  302538 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:06:08.016110  302538 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:06:08.031124  302538 system_pods.go:59] 8 kube-system pods found
	I0920 19:06:08.031177  302538 system_pods.go:61] "coredns-7c65d6cfc9-8gmsq" [91d89ad2-f899-464c-b351-a0773c16223b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:06:08.031191  302538 system_pods.go:61] "etcd-no-preload-037711" [5b353ad3-0389-4e3d-b5c3-2f2bc65db200] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 19:06:08.031203  302538 system_pods.go:61] "kube-apiserver-no-preload-037711" [b19002c7-f891-4bc1-a2f0-0f6beebb3987] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 19:06:08.031247  302538 system_pods.go:61] "kube-controller-manager-no-preload-037711" [a5b1951d-7189-4ee3-bc28-bed058048ebb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:06:08.031262  302538 system_pods.go:61] "kube-proxy-zzmkv" [c8f4695b-eefd-407a-9b7c-d5078632d120] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 19:06:08.031270  302538 system_pods.go:61] "kube-scheduler-no-preload-037711" [b44824ba-52ad-4e86-9408-118f0e1852d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 19:06:08.031280  302538 system_pods.go:61] "metrics-server-6867b74b74-7xpgm" [f6280d56-5be4-475f-91da-2862e992868f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:06:08.031290  302538 system_pods.go:61] "storage-provisioner" [d1efb64f-d2a9-4bb4-9bc3-c643c415fcf2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 19:06:08.031300  302538 system_pods.go:74] duration metric: took 15.160935ms to wait for pod list to return data ...
	I0920 19:06:08.031310  302538 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:06:08.035903  302538 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:06:08.035953  302538 node_conditions.go:123] node cpu capacity is 2
	I0920 19:06:08.035968  302538 node_conditions.go:105] duration metric: took 4.652846ms to run NodePressure ...
	I0920 19:06:08.035995  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:06:08.404721  302538 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 19:06:08.409400  302538 kubeadm.go:739] kubelet initialised
	I0920 19:06:08.409423  302538 kubeadm.go:740] duration metric: took 4.670172ms waiting for restarted kubelet to initialise ...
	I0920 19:06:08.409432  302538 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:06:08.416547  302538 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:10.426817  302538 pod_ready.go:103] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:08.107050  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:10.606744  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:09.236127  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:09.735809  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:10.236585  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:10.735863  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:11.236700  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:11.736557  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:12.236483  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:12.735695  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:13.235905  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:13.736128  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:10.785471  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:13.284642  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:12.923811  302538 pod_ready.go:103] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:15.423162  302538 pod_ready.go:103] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:15.926280  302538 pod_ready.go:93] pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:15.926318  302538 pod_ready.go:82] duration metric: took 7.509740963s for pod "coredns-7c65d6cfc9-8gmsq" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:15.926332  302538 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:15.932683  302538 pod_ready.go:93] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:15.932713  302538 pod_ready.go:82] duration metric: took 6.372388ms for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:15.932725  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:13.111190  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:15.606371  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:14.236234  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:14.736677  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:15.236499  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:15.735667  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:16.235774  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:16.735833  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:17.236149  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:17.735782  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:18.236400  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:18.736460  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:15.784441  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:18.284748  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:17.938853  302538 pod_ready.go:103] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:19.939569  302538 pod_ready.go:103] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:18.104867  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:20.105870  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:22.605773  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:19.236298  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:19.736672  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:20.236401  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:20.735810  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:21.235673  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:21.736112  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:22.235998  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:22.736179  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:23.236680  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:23.736388  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:20.783320  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:22.783590  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:21.939753  302538 pod_ready.go:93] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:21.939781  302538 pod_ready.go:82] duration metric: took 6.007035191s for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:21.939794  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.446396  302538 pod_ready.go:93] pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:22.446425  302538 pod_ready.go:82] duration metric: took 506.622064ms for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.446435  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zzmkv" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.452105  302538 pod_ready.go:93] pod "kube-proxy-zzmkv" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:22.452130  302538 pod_ready.go:82] duration metric: took 5.688419ms for pod "kube-proxy-zzmkv" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.452139  302538 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.456181  302538 pod_ready.go:93] pod "kube-scheduler-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:06:22.456205  302538 pod_ready.go:82] duration metric: took 4.05917ms for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:22.456215  302538 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace to be "Ready" ...
	I0920 19:06:24.463262  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:24.606021  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:27.105497  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:24.236369  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:24.736082  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:25.236694  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:25.736346  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:26.236075  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:26.736666  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:27.236418  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:27.736656  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:28.235972  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:28.735743  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:24.783673  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:26.783960  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.283970  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:26.962413  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.462423  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.606628  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:32.105603  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:29.236688  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:29.736132  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:30.236404  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:30.735733  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:31.236364  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:31.736031  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:32.236457  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:32.735751  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:33.236371  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:33.736474  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:31.284572  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:33.286630  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:31.464686  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:33.962309  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:35.963445  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:34.105897  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:36.605140  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:34.236387  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:34.236472  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:34.276702  303486 cri.go:89] found id: ""
	I0920 19:06:34.276735  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.276747  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:34.276758  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:34.276815  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:34.312886  303486 cri.go:89] found id: ""
	I0920 19:06:34.312923  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.312935  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:34.312950  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:34.313024  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:34.347199  303486 cri.go:89] found id: ""
	I0920 19:06:34.347240  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.347250  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:34.347258  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:34.347332  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:34.383077  303486 cri.go:89] found id: ""
	I0920 19:06:34.383110  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.383121  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:34.383130  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:34.383202  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:34.421184  303486 cri.go:89] found id: ""
	I0920 19:06:34.421212  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.421222  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:34.421231  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:34.421304  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:34.459964  303486 cri.go:89] found id: ""
	I0920 19:06:34.459998  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.460009  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:34.460018  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:34.460085  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:34.493761  303486 cri.go:89] found id: ""
	I0920 19:06:34.493803  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.493815  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:34.493824  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:34.493894  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:34.534406  303486 cri.go:89] found id: ""
	I0920 19:06:34.534445  303486 logs.go:276] 0 containers: []
	W0920 19:06:34.534457  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:34.534471  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:34.534496  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:34.607256  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:34.607297  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:34.644923  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:34.644953  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:34.693574  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:34.693622  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:34.707703  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:34.707742  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:34.846809  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:37.347895  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:37.377651  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:37.377728  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:37.430034  303486 cri.go:89] found id: ""
	I0920 19:06:37.430071  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.430079  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:37.430087  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:37.430156  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:37.467026  303486 cri.go:89] found id: ""
	I0920 19:06:37.467055  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.467063  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:37.467069  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:37.467120  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:37.505791  303486 cri.go:89] found id: ""
	I0920 19:06:37.505824  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.505835  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:37.505845  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:37.505943  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:37.541519  303486 cri.go:89] found id: ""
	I0920 19:06:37.541556  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.541568  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:37.541577  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:37.541633  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:37.576088  303486 cri.go:89] found id: ""
	I0920 19:06:37.576126  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.576137  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:37.576146  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:37.576204  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:37.613039  303486 cri.go:89] found id: ""
	I0920 19:06:37.613074  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.613084  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:37.613091  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:37.613153  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:37.656440  303486 cri.go:89] found id: ""
	I0920 19:06:37.656473  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.656482  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:37.656489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:37.656555  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:37.693247  303486 cri.go:89] found id: ""
	I0920 19:06:37.693283  303486 logs.go:276] 0 containers: []
	W0920 19:06:37.693292  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:37.693302  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:37.693321  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:37.769230  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:37.769280  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:37.811016  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:37.811058  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:37.865729  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:37.865773  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:37.880056  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:37.880094  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:37.956402  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:35.783789  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:37.787063  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:38.461824  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:40.465028  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:38.605494  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:40.605606  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:40.457303  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:40.473769  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:40.473848  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:40.511320  303486 cri.go:89] found id: ""
	I0920 19:06:40.511354  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.511363  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:40.511371  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:40.511433  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:40.547086  303486 cri.go:89] found id: ""
	I0920 19:06:40.547127  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.547138  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:40.547147  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:40.547216  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:40.580969  303486 cri.go:89] found id: ""
	I0920 19:06:40.581010  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.581022  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:40.581035  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:40.581098  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:40.615802  303486 cri.go:89] found id: ""
	I0920 19:06:40.615842  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.615851  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:40.615858  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:40.615931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:40.649398  303486 cri.go:89] found id: ""
	I0920 19:06:40.649444  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.649459  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:40.649467  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:40.649541  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:40.683124  303486 cri.go:89] found id: ""
	I0920 19:06:40.683160  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.683172  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:40.683181  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:40.683251  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:40.718005  303486 cri.go:89] found id: ""
	I0920 19:06:40.718032  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.718040  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:40.718047  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:40.718107  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:40.751965  303486 cri.go:89] found id: ""
	I0920 19:06:40.751992  303486 logs.go:276] 0 containers: []
	W0920 19:06:40.752000  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:40.752010  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:40.752024  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:40.765195  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:40.765234  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:40.842287  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:40.842321  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:40.842338  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:40.928384  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:40.928430  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:40.970207  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:40.970242  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:43.526435  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:43.540582  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:43.540680  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:43.576798  303486 cri.go:89] found id: ""
	I0920 19:06:43.576837  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.576846  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:43.576852  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:43.576916  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:43.615261  303486 cri.go:89] found id: ""
	I0920 19:06:43.615290  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.615298  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:43.615305  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:43.615359  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:43.651214  303486 cri.go:89] found id: ""
	I0920 19:06:43.651251  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.651264  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:43.651277  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:43.651338  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:43.684483  303486 cri.go:89] found id: ""
	I0920 19:06:43.684523  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.684535  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:43.684544  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:43.684614  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:43.720996  303486 cri.go:89] found id: ""
	I0920 19:06:43.721026  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.721035  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:43.721041  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:43.721107  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:43.764445  303486 cri.go:89] found id: ""
	I0920 19:06:43.764482  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.764493  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:43.764501  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:43.764564  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:43.808848  303486 cri.go:89] found id: ""
	I0920 19:06:43.808878  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.808888  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:43.808897  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:43.808968  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:43.845462  303486 cri.go:89] found id: ""
	I0920 19:06:43.845491  303486 logs.go:276] 0 containers: []
	W0920 19:06:43.845500  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:43.845511  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:43.845525  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:43.896550  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:43.896596  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:43.909243  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:43.909272  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:06:40.284735  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:42.783363  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:42.962289  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:44.963071  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:43.106353  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:45.606296  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	W0920 19:06:43.987455  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:43.987474  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:43.987491  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:44.063585  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:44.063629  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:46.602859  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:46.617286  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:46.617357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:46.653643  303486 cri.go:89] found id: ""
	I0920 19:06:46.653681  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.653693  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:46.653702  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:46.653778  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:46.691169  303486 cri.go:89] found id: ""
	I0920 19:06:46.691198  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.691206  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:46.691213  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:46.691271  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:46.725498  303486 cri.go:89] found id: ""
	I0920 19:06:46.725527  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.725538  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:46.725545  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:46.725614  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:46.758850  303486 cri.go:89] found id: ""
	I0920 19:06:46.758876  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.758884  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:46.758891  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:46.758942  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:46.793648  303486 cri.go:89] found id: ""
	I0920 19:06:46.793683  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.793692  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:46.793699  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:46.793755  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:46.832908  303486 cri.go:89] found id: ""
	I0920 19:06:46.832940  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.832947  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:46.832953  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:46.833019  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:46.866450  303486 cri.go:89] found id: ""
	I0920 19:06:46.866502  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.866513  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:46.866522  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:46.866593  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:46.901966  303486 cri.go:89] found id: ""
	I0920 19:06:46.902001  303486 logs.go:276] 0 containers: []
	W0920 19:06:46.902013  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:46.902026  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:46.902041  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:46.948901  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:46.948946  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:46.963489  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:46.963534  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:47.041701  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:47.041722  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:47.041736  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:47.124320  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:47.124364  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:44.783818  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:46.784000  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:48.785175  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:46.963700  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:49.462018  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:48.104361  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:50.105520  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:52.605799  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:49.664255  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:49.677240  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:49.677322  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:49.712375  303486 cri.go:89] found id: ""
	I0920 19:06:49.712401  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.712409  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:49.712415  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:49.712476  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:49.747682  303486 cri.go:89] found id: ""
	I0920 19:06:49.747713  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.747721  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:49.747727  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:49.747783  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:49.782276  303486 cri.go:89] found id: ""
	I0920 19:06:49.782319  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.782329  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:49.782337  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:49.782400  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:49.822625  303486 cri.go:89] found id: ""
	I0920 19:06:49.822661  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.822672  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:49.822680  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:49.822751  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:49.862159  303486 cri.go:89] found id: ""
	I0920 19:06:49.862192  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.862202  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:49.862212  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:49.862281  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:49.897552  303486 cri.go:89] found id: ""
	I0920 19:06:49.897587  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.897595  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:49.897608  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:49.897667  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:49.931667  303486 cri.go:89] found id: ""
	I0920 19:06:49.931698  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.931709  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:49.931718  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:49.931774  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:49.969206  303486 cri.go:89] found id: ""
	I0920 19:06:49.969236  303486 logs.go:276] 0 containers: []
	W0920 19:06:49.969244  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:49.969254  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:49.969266  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:50.019287  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:50.019328  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:50.033080  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:50.033113  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:50.106415  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:50.106442  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:50.106459  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:50.183710  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:50.183762  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:52.725443  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:52.739293  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:52.739386  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:52.772412  303486 cri.go:89] found id: ""
	I0920 19:06:52.772445  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.772454  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:52.772461  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:52.772528  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:52.811153  303486 cri.go:89] found id: ""
	I0920 19:06:52.811189  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.811197  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:52.811204  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:52.811260  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:52.848709  303486 cri.go:89] found id: ""
	I0920 19:06:52.848740  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.848749  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:52.848755  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:52.848811  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:52.883358  303486 cri.go:89] found id: ""
	I0920 19:06:52.883387  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.883394  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:52.883400  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:52.883455  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:52.917838  303486 cri.go:89] found id: ""
	I0920 19:06:52.917874  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.917893  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:52.917912  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:52.917982  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:52.952340  303486 cri.go:89] found id: ""
	I0920 19:06:52.952378  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.952387  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:52.952396  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:52.952471  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:52.986433  303486 cri.go:89] found id: ""
	I0920 19:06:52.986469  303486 logs.go:276] 0 containers: []
	W0920 19:06:52.986478  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:52.986486  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:52.986582  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:53.024209  303486 cri.go:89] found id: ""
	I0920 19:06:53.024241  303486 logs.go:276] 0 containers: []
	W0920 19:06:53.024249  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:53.024260  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:53.024272  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:53.075336  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:53.075374  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:53.090761  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:53.090802  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:53.167883  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:53.167915  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:53.167933  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:53.242003  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:53.242044  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:50.785624  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:53.284212  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:51.462197  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:53.962545  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:55.962875  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:54.607806  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:57.105146  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:55.779107  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:55.793713  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:55.793802  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:55.829411  303486 cri.go:89] found id: ""
	I0920 19:06:55.829441  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.829450  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:55.829456  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:55.829513  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:55.864578  303486 cri.go:89] found id: ""
	I0920 19:06:55.864606  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.864617  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:55.864625  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:55.864686  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:55.897004  303486 cri.go:89] found id: ""
	I0920 19:06:55.897033  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.897041  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:55.897048  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:55.897106  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:55.931019  303486 cri.go:89] found id: ""
	I0920 19:06:55.931055  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.931066  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:55.931076  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:55.931141  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:55.966595  303486 cri.go:89] found id: ""
	I0920 19:06:55.966625  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.966635  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:55.966643  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:55.966693  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:55.999707  303486 cri.go:89] found id: ""
	I0920 19:06:55.999736  303486 logs.go:276] 0 containers: []
	W0920 19:06:55.999747  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:55.999756  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:55.999825  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:56.034323  303486 cri.go:89] found id: ""
	I0920 19:06:56.034361  303486 logs.go:276] 0 containers: []
	W0920 19:06:56.034371  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:56.034377  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:56.034433  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:56.069019  303486 cri.go:89] found id: ""
	I0920 19:06:56.069048  303486 logs.go:276] 0 containers: []
	W0920 19:06:56.069056  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:56.069066  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:56.069077  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:06:56.122820  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:56.122860  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:56.136924  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:56.136966  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:56.216255  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:56.216284  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:56.216299  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:56.293461  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:56.293506  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:58.831252  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:58.844410  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:58.844474  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:58.877508  303486 cri.go:89] found id: ""
	I0920 19:06:58.877539  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.877547  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:06:58.877555  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:06:58.877613  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:58.911284  303486 cri.go:89] found id: ""
	I0920 19:06:58.911315  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.911323  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:06:58.911329  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:06:58.911382  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:58.944646  303486 cri.go:89] found id: ""
	I0920 19:06:58.944675  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.944682  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:06:58.944688  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:58.944739  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:55.784379  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:58.283450  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:58.461839  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:00.461977  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:59.108066  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:01.605247  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:58.979752  303486 cri.go:89] found id: ""
	I0920 19:06:58.979787  303486 logs.go:276] 0 containers: []
	W0920 19:06:58.979798  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:06:58.979807  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:58.979864  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:59.016613  303486 cri.go:89] found id: ""
	I0920 19:06:59.016649  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.016661  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:06:59.016670  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:59.016735  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:59.052012  303486 cri.go:89] found id: ""
	I0920 19:06:59.052039  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.052047  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:06:59.052054  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:59.052106  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:59.090102  303486 cri.go:89] found id: ""
	I0920 19:06:59.090140  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.090152  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:06:59.090159  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:59.090213  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:59.128028  303486 cri.go:89] found id: ""
	I0920 19:06:59.128057  303486 logs.go:276] 0 containers: []
	W0920 19:06:59.128068  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:06:59.128080  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:59.128096  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:59.142966  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:59.143012  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:06:59.227311  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:06:59.227336  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:06:59.227357  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:06:59.308319  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:06:59.308366  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:59.347299  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:59.347336  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:01.897644  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:01.912876  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:01.912951  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:01.956550  303486 cri.go:89] found id: ""
	I0920 19:07:01.956679  303486 logs.go:276] 0 containers: []
	W0920 19:07:01.956690  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:01.956700  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:01.956765  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:01.995391  303486 cri.go:89] found id: ""
	I0920 19:07:01.995425  303486 logs.go:276] 0 containers: []
	W0920 19:07:01.995433  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:01.995440  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:01.995501  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:02.031149  303486 cri.go:89] found id: ""
	I0920 19:07:02.031181  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.031193  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:02.031202  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:02.031273  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:02.065856  303486 cri.go:89] found id: ""
	I0920 19:07:02.065885  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.065894  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:02.065924  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:02.065981  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:02.101974  303486 cri.go:89] found id: ""
	I0920 19:07:02.102018  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.102032  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:02.102041  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:02.102115  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:02.138108  303486 cri.go:89] found id: ""
	I0920 19:07:02.138142  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.138151  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:02.138156  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:02.138217  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:02.170136  303486 cri.go:89] found id: ""
	I0920 19:07:02.170165  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.170173  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:02.170179  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:02.170244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:02.203944  303486 cri.go:89] found id: ""
	I0920 19:07:02.203969  303486 logs.go:276] 0 containers: []
	W0920 19:07:02.203978  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:02.203991  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:02.204008  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:02.256635  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:02.256679  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:02.270266  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:02.270303  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:02.341145  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:02.341182  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:02.341199  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:02.415133  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:02.415175  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:00.283726  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:02.285304  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:02.462310  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:04.462872  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:03.605300  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:06.105872  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:04.952448  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:04.966632  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:04.966702  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:05.001098  303486 cri.go:89] found id: ""
	I0920 19:07:05.001131  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.001141  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:05.001149  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:05.001217  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:05.038160  303486 cri.go:89] found id: ""
	I0920 19:07:05.038186  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.038196  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:05.038202  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:05.038260  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:05.083301  303486 cri.go:89] found id: ""
	I0920 19:07:05.083346  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.083357  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:05.083365  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:05.083436  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:05.118916  303486 cri.go:89] found id: ""
	I0920 19:07:05.118952  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.118964  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:05.118972  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:05.119065  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:05.157452  303486 cri.go:89] found id: ""
	I0920 19:07:05.157485  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.157496  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:05.157511  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:05.157587  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:05.197100  303486 cri.go:89] found id: ""
	I0920 19:07:05.197133  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.197143  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:05.197152  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:05.197225  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:05.231286  303486 cri.go:89] found id: ""
	I0920 19:07:05.231317  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.231328  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:05.231336  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:05.231409  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:05.269798  303486 cri.go:89] found id: ""
	I0920 19:07:05.269835  303486 logs.go:276] 0 containers: []
	W0920 19:07:05.269847  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:05.269862  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:05.269882  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:05.310029  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:05.310068  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:05.360493  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:05.360537  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:05.373771  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:05.373815  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:05.449860  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:05.449886  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:05.449924  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:08.034520  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:08.049970  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:08.050040  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:08.084683  303486 cri.go:89] found id: ""
	I0920 19:07:08.084714  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.084724  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:08.084731  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:08.084799  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:08.121150  303486 cri.go:89] found id: ""
	I0920 19:07:08.121176  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.121183  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:08.121190  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:08.121244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:08.157830  303486 cri.go:89] found id: ""
	I0920 19:07:08.157865  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.157877  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:08.157891  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:08.157967  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:08.191040  303486 cri.go:89] found id: ""
	I0920 19:07:08.191082  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.191094  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:08.191102  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:08.191169  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:08.230194  303486 cri.go:89] found id: ""
	I0920 19:07:08.230230  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.230239  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:08.230246  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:08.230304  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:08.268526  303486 cri.go:89] found id: ""
	I0920 19:07:08.268558  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.268566  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:08.268573  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:08.268631  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:08.302383  303486 cri.go:89] found id: ""
	I0920 19:07:08.302411  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.302420  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:08.302428  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:08.302492  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:08.336435  303486 cri.go:89] found id: ""
	I0920 19:07:08.336469  303486 logs.go:276] 0 containers: []
	W0920 19:07:08.336479  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:08.336491  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:08.336505  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:08.418086  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:08.418129  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:08.458355  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:08.458391  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:08.507017  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:08.507062  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:08.522701  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:08.522737  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:08.592777  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:04.784475  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:07.283612  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:09.286218  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:06.963106  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:09.462861  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:08.108458  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:10.605447  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:12.605992  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:11.093689  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:11.107438  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:11.107503  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:11.139701  303486 cri.go:89] found id: ""
	I0920 19:07:11.139742  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.139755  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:11.139765  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:11.139822  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:11.196143  303486 cri.go:89] found id: ""
	I0920 19:07:11.196182  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.196191  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:11.196197  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:11.196268  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:11.232121  303486 cri.go:89] found id: ""
	I0920 19:07:11.232156  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.232164  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:11.232171  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:11.232238  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:11.267307  303486 cri.go:89] found id: ""
	I0920 19:07:11.267338  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.267349  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:11.267358  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:11.267423  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:11.306583  303486 cri.go:89] found id: ""
	I0920 19:07:11.306614  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.306623  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:11.306631  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:11.306698  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:11.348162  303486 cri.go:89] found id: ""
	I0920 19:07:11.348188  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.348196  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:11.348203  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:11.348257  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:11.383612  303486 cri.go:89] found id: ""
	I0920 19:07:11.383649  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.383660  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:11.383669  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:11.383736  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:11.417538  303486 cri.go:89] found id: ""
	I0920 19:07:11.417575  303486 logs.go:276] 0 containers: []
	W0920 19:07:11.417583  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:11.417593  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:11.417609  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:11.470242  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:11.470282  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:11.485448  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:11.485480  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:11.559466  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:11.559495  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:11.559513  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:11.636080  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:11.636133  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:11.783461  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:13.783785  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:11.462940  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:13.963340  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:14.609611  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:17.105222  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:14.177278  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:14.190413  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:14.190483  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:14.224238  303486 cri.go:89] found id: ""
	I0920 19:07:14.224264  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.224272  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:14.224278  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:14.224330  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:14.265253  303486 cri.go:89] found id: ""
	I0920 19:07:14.265285  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.265297  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:14.265304  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:14.265357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:14.300591  303486 cri.go:89] found id: ""
	I0920 19:07:14.300619  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.300633  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:14.300639  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:14.300695  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:14.335638  303486 cri.go:89] found id: ""
	I0920 19:07:14.335669  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.335677  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:14.335683  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:14.335735  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:14.369291  303486 cri.go:89] found id: ""
	I0920 19:07:14.369328  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.369336  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:14.369344  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:14.369397  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:14.404913  303486 cri.go:89] found id: ""
	I0920 19:07:14.404947  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.404958  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:14.404967  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:14.405034  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:14.438793  303486 cri.go:89] found id: ""
	I0920 19:07:14.438834  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.438845  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:14.438856  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:14.438926  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:14.475268  303486 cri.go:89] found id: ""
	I0920 19:07:14.475297  303486 logs.go:276] 0 containers: []
	W0920 19:07:14.475305  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:14.475321  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:14.475342  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:14.528066  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:14.528126  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:14.542850  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:14.542891  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:14.612772  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:14.612800  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:14.612819  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:14.694528  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:14.694579  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:17.234389  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:17.247479  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:17.247544  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:17.285461  303486 cri.go:89] found id: ""
	I0920 19:07:17.285488  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.285496  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:17.285502  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:17.285553  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:17.320580  303486 cri.go:89] found id: ""
	I0920 19:07:17.320606  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.320614  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:17.320620  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:17.320677  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:17.356405  303486 cri.go:89] found id: ""
	I0920 19:07:17.356440  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.356462  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:17.356471  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:17.356526  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:17.391268  303486 cri.go:89] found id: ""
	I0920 19:07:17.391301  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.391309  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:17.391316  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:17.391381  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:17.429886  303486 cri.go:89] found id: ""
	I0920 19:07:17.429938  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.429950  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:17.429959  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:17.430022  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:17.466059  303486 cri.go:89] found id: ""
	I0920 19:07:17.466093  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.466104  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:17.466111  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:17.466176  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:17.501128  303486 cri.go:89] found id: ""
	I0920 19:07:17.501159  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.501168  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:17.501174  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:17.501247  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:17.536969  303486 cri.go:89] found id: ""
	I0920 19:07:17.536999  303486 logs.go:276] 0 containers: []
	W0920 19:07:17.537007  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:17.537016  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:17.537031  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:17.592071  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:17.592119  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:17.609022  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:17.609057  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:17.696393  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:17.696420  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:17.696434  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:17.778077  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:17.778122  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:15.785002  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:18.284101  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:16.463809  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:18.964348  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:19.604758  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:21.608192  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:20.319211  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:20.332158  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:20.332235  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:20.366195  303486 cri.go:89] found id: ""
	I0920 19:07:20.366230  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.366241  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:20.366250  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:20.366313  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:20.401786  303486 cri.go:89] found id: ""
	I0920 19:07:20.401819  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.401829  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:20.401846  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:20.401943  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:20.433684  303486 cri.go:89] found id: ""
	I0920 19:07:20.433711  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.433719  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:20.433725  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:20.433783  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:20.469495  303486 cri.go:89] found id: ""
	I0920 19:07:20.469524  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.469535  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:20.469543  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:20.469613  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:20.502214  303486 cri.go:89] found id: ""
	I0920 19:07:20.502245  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.502256  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:20.502263  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:20.502329  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:20.535829  303486 cri.go:89] found id: ""
	I0920 19:07:20.535867  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.535879  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:20.535887  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:20.535952  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:20.569605  303486 cri.go:89] found id: ""
	I0920 19:07:20.569635  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.569643  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:20.569654  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:20.569726  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:20.603676  303486 cri.go:89] found id: ""
	I0920 19:07:20.603699  303486 logs.go:276] 0 containers: []
	W0920 19:07:20.603706  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:20.603715  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:20.603726  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:20.656645  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:20.656692  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:20.671077  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:20.671107  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:20.740996  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:20.741028  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:20.741046  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:20.820541  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:20.820592  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:23.362973  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:23.380350  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:23.380432  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:23.423145  303486 cri.go:89] found id: ""
	I0920 19:07:23.423183  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.423193  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:23.423202  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:23.423272  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:23.459019  303486 cri.go:89] found id: ""
	I0920 19:07:23.459057  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.459068  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:23.459077  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:23.459144  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:23.502876  303486 cri.go:89] found id: ""
	I0920 19:07:23.502908  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.502920  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:23.502929  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:23.502994  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:23.538440  303486 cri.go:89] found id: ""
	I0920 19:07:23.538471  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.538481  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:23.538489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:23.538552  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:23.575164  303486 cri.go:89] found id: ""
	I0920 19:07:23.575199  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.575211  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:23.575220  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:23.575296  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:23.610449  303486 cri.go:89] found id: ""
	I0920 19:07:23.610480  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.610489  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:23.610495  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:23.610562  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:23.644164  303486 cri.go:89] found id: ""
	I0920 19:07:23.644195  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.644203  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:23.644209  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:23.644275  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:23.684379  303486 cri.go:89] found id: ""
	I0920 19:07:23.684417  303486 logs.go:276] 0 containers: []
	W0920 19:07:23.684428  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:23.684442  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:23.684459  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:23.762838  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:23.762885  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:23.805616  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:23.805650  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:23.857080  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:23.857122  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:23.870602  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:23.870635  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:23.941187  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:20.284264  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:22.284388  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:24.285108  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:21.462493  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:23.467933  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:25.963071  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:24.106087  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:26.605442  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:26.441571  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:26.455091  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:26.455185  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:26.489658  303486 cri.go:89] found id: ""
	I0920 19:07:26.489696  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.489707  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:26.489716  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:26.489773  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:26.528829  303486 cri.go:89] found id: ""
	I0920 19:07:26.528865  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.528878  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:26.528886  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:26.528966  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:26.568402  303486 cri.go:89] found id: ""
	I0920 19:07:26.568429  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.568443  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:26.568450  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:26.568503  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:26.606654  303486 cri.go:89] found id: ""
	I0920 19:07:26.606683  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.606693  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:26.606701  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:26.606764  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:26.640825  303486 cri.go:89] found id: ""
	I0920 19:07:26.640856  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.640864  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:26.640871  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:26.640934  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:26.677023  303486 cri.go:89] found id: ""
	I0920 19:07:26.677054  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.677062  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:26.677068  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:26.677123  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:26.712921  303486 cri.go:89] found id: ""
	I0920 19:07:26.712956  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.712964  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:26.712971  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:26.713031  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:26.747750  303486 cri.go:89] found id: ""
	I0920 19:07:26.747778  303486 logs.go:276] 0 containers: []
	W0920 19:07:26.747786  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:26.747796  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:26.747810  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:26.799240  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:26.799283  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:26.813197  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:26.813233  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:26.882751  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:26.882780  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:26.882799  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:26.965108  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:26.965146  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
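	(The cycle above keeps repeating because minikube finds no control-plane containers to pull logs from: each pass probes every expected component with crictl and, finding none, falls back to journalctl, dmesg, describe nodes, and container status. Below is a minimal illustrative sketch of that probe pattern, not minikube's own cri.go/logs.go code; it assumes only that crictl is installed and runnable via sudo on the node.)

	// probe_sketch.go — illustrative only; mirrors the "crictl ps -a --quiet --name=<component>"
	// probes shown in the log above and reports how many container IDs each probe returns.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Same command the log records via ssh_runner: sudo crictl ps -a --quiet --name=<component>
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("probe for %q failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(strings.TrimSpace(string(out)))
			fmt.Printf("%-24s %d container(s) found\n", name, len(ids))
		}
	}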
	I0920 19:07:26.784306  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:29.283573  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:28.461526  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:30.462242  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:28.606602  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:31.106657  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:29.503960  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:29.516601  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:29.516669  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:29.555581  303486 cri.go:89] found id: ""
	I0920 19:07:29.555622  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.555632  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:29.555640  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:29.555711  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:29.593858  303486 cri.go:89] found id: ""
	I0920 19:07:29.593885  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.593923  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:29.593937  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:29.593990  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:29.629507  303486 cri.go:89] found id: ""
	I0920 19:07:29.629538  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.629548  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:29.629557  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:29.629616  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:29.662880  303486 cri.go:89] found id: ""
	I0920 19:07:29.662913  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.662921  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:29.662928  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:29.662976  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:29.695422  303486 cri.go:89] found id: ""
	I0920 19:07:29.695448  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.695458  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:29.695466  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:29.695531  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:29.730641  303486 cri.go:89] found id: ""
	I0920 19:07:29.730673  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.730685  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:29.730693  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:29.730756  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:29.764186  303486 cri.go:89] found id: ""
	I0920 19:07:29.764220  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.764229  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:29.764238  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:29.764302  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:29.804146  303486 cri.go:89] found id: ""
	I0920 19:07:29.804174  303486 logs.go:276] 0 containers: []
	W0920 19:07:29.804182  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:29.804191  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:29.804204  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:29.885573  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:29.885633  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:29.924619  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:29.924667  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:29.978187  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:29.978230  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:29.992161  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:29.992190  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:30.069767  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:32.570197  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:32.583160  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:32.583244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:32.620842  303486 cri.go:89] found id: ""
	I0920 19:07:32.620870  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.620881  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:32.620899  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:32.620958  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:32.657169  303486 cri.go:89] found id: ""
	I0920 19:07:32.657205  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.657216  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:32.657225  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:32.657292  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:32.694773  303486 cri.go:89] found id: ""
	I0920 19:07:32.694802  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.694809  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:32.694815  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:32.694882  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:32.733318  303486 cri.go:89] found id: ""
	I0920 19:07:32.733350  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.733360  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:32.733370  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:32.733436  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:32.766019  303486 cri.go:89] found id: ""
	I0920 19:07:32.766052  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.766062  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:32.766070  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:32.766138  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:32.801412  303486 cri.go:89] found id: ""
	I0920 19:07:32.801443  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.801454  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:32.801463  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:32.801533  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:32.833743  303486 cri.go:89] found id: ""
	I0920 19:07:32.833771  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.833779  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:32.833787  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:32.833847  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:32.866775  303486 cri.go:89] found id: ""
	I0920 19:07:32.866803  303486 logs.go:276] 0 containers: []
	W0920 19:07:32.866811  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:32.866821  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:32.866839  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:32.919257  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:32.919310  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:32.933554  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:32.933602  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:33.002657  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:33.002702  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:33.002721  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:33.081271  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:33.081316  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:31.284488  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:33.782998  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:32.462645  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:34.963285  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:33.609072  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:36.107460  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
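	(The interleaved pod_ready.go lines come from the other test processes polling whether each metrics-server pod has reached the PodReady condition. A minimal client-go sketch of that check follows; it is not the test suite's own helper. It assumes a reachable cluster and a kubeconfig at the default location; the pod name is copied from the log above.)

	// pod_ready_sketch.go — illustrative only; reports whether a pod's PodReady condition is True,
	// which is what the "Ready":"False" log lines above are polling for.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(client kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		ready, err := podReady(client, "kube-system", "metrics-server-6867b74b74-qqhcw")
		fmt.Println(ready, err)
	}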
	I0920 19:07:35.627131  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:35.640958  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:35.641032  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:35.675943  303486 cri.go:89] found id: ""
	I0920 19:07:35.675976  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.675984  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:35.675991  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:35.676044  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:35.710075  303486 cri.go:89] found id: ""
	I0920 19:07:35.710104  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.710116  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:35.710124  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:35.710194  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:35.747890  303486 cri.go:89] found id: ""
	I0920 19:07:35.747920  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.747931  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:35.747939  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:35.748004  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:35.786197  303486 cri.go:89] found id: ""
	I0920 19:07:35.786231  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.786242  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:35.786252  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:35.786314  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:35.819109  303486 cri.go:89] found id: ""
	I0920 19:07:35.819146  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.819158  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:35.819168  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:35.819244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:35.853244  303486 cri.go:89] found id: ""
	I0920 19:07:35.853282  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.853292  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:35.853301  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:35.853378  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:35.886864  303486 cri.go:89] found id: ""
	I0920 19:07:35.886897  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.886908  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:35.886917  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:35.886986  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:35.920872  303486 cri.go:89] found id: ""
	I0920 19:07:35.920906  303486 logs.go:276] 0 containers: []
	W0920 19:07:35.920917  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:35.920939  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:35.920957  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:35.998741  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:35.998794  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:36.040681  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:36.040720  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:36.095848  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:36.095909  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:36.110903  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:36.110939  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:36.186658  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:38.687762  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:38.701640  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:38.701708  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:38.734908  303486 cri.go:89] found id: ""
	I0920 19:07:38.734946  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.734956  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:38.734966  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:38.735031  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:38.768062  303486 cri.go:89] found id: ""
	I0920 19:07:38.768100  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.768112  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:38.768120  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:38.768188  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:38.800881  303486 cri.go:89] found id: ""
	I0920 19:07:38.800915  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.800927  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:38.800936  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:38.801004  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:38.835119  303486 cri.go:89] found id: ""
	I0920 19:07:38.835148  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.835156  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:38.835164  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:38.835223  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:38.872677  303486 cri.go:89] found id: ""
	I0920 19:07:38.872712  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.872723  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:38.872733  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:38.872807  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:38.913921  303486 cri.go:89] found id: ""
	I0920 19:07:38.913955  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.913965  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:38.913972  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:38.914029  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:35.783443  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.284549  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:36.963668  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.963893  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.608347  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:41.106313  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:38.951849  303486 cri.go:89] found id: ""
	I0920 19:07:38.951882  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.951893  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:38.951902  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:38.951972  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:38.988117  303486 cri.go:89] found id: ""
	I0920 19:07:38.988149  303486 logs.go:276] 0 containers: []
	W0920 19:07:38.988161  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:38.988177  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:38.988191  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:39.028804  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:39.028843  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:39.083374  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:39.083427  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:39.097434  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:39.097463  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:39.172185  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:39.172213  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:39.172226  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:41.756648  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:41.772358  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:41.772432  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:41.809067  303486 cri.go:89] found id: ""
	I0920 19:07:41.809109  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.809123  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:41.809132  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:41.809191  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:41.853413  303486 cri.go:89] found id: ""
	I0920 19:07:41.853445  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.853457  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:41.853465  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:41.853524  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:41.891536  303486 cri.go:89] found id: ""
	I0920 19:07:41.891569  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.891580  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:41.891588  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:41.891668  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:41.931046  303486 cri.go:89] found id: ""
	I0920 19:07:41.931085  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.931093  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:41.931099  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:41.931155  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:41.968120  303486 cri.go:89] found id: ""
	I0920 19:07:41.968152  303486 logs.go:276] 0 containers: []
	W0920 19:07:41.968164  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:41.968172  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:41.968240  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:42.002478  303486 cri.go:89] found id: ""
	I0920 19:07:42.002512  303486 logs.go:276] 0 containers: []
	W0920 19:07:42.002523  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:42.002532  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:42.002599  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:42.038031  303486 cri.go:89] found id: ""
	I0920 19:07:42.038067  303486 logs.go:276] 0 containers: []
	W0920 19:07:42.038080  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:42.038087  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:42.038150  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:42.072124  303486 cri.go:89] found id: ""
	I0920 19:07:42.072155  303486 logs.go:276] 0 containers: []
	W0920 19:07:42.072166  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:42.072178  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:42.072195  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:42.128217  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:42.128259  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:42.142291  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:42.142322  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:42.215278  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:42.215305  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:42.215324  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:42.293431  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:42.293476  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:40.784191  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:43.283580  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:41.463429  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:43.963059  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:43.608790  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:46.105338  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:44.836094  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:44.850327  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:44.850397  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:44.884595  303486 cri.go:89] found id: ""
	I0920 19:07:44.884624  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.884632  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:44.884639  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:44.884711  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:44.917727  303486 cri.go:89] found id: ""
	I0920 19:07:44.917754  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.917763  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:44.917769  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:44.917837  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:44.955821  303486 cri.go:89] found id: ""
	I0920 19:07:44.955860  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.955871  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:44.955879  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:44.955937  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:44.994543  303486 cri.go:89] found id: ""
	I0920 19:07:44.994579  303486 logs.go:276] 0 containers: []
	W0920 19:07:44.994590  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:44.994598  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:44.994651  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:45.031839  303486 cri.go:89] found id: ""
	I0920 19:07:45.031877  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.031888  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:45.031896  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:45.031962  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:45.070554  303486 cri.go:89] found id: ""
	I0920 19:07:45.070588  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.070601  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:45.070609  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:45.070678  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:45.108727  303486 cri.go:89] found id: ""
	I0920 19:07:45.108760  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.108771  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:45.108779  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:45.108855  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:45.144045  303486 cri.go:89] found id: ""
	I0920 19:07:45.144075  303486 logs.go:276] 0 containers: []
	W0920 19:07:45.144083  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:45.144094  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:45.144108  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:45.185800  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:45.185834  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:45.238364  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:45.238410  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:45.252111  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:45.252145  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:45.329009  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:45.329036  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:45.329051  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:47.912910  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:47.926378  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:47.926458  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:47.961067  303486 cri.go:89] found id: ""
	I0920 19:07:47.961094  303486 logs.go:276] 0 containers: []
	W0920 19:07:47.961103  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:47.961111  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:47.961172  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:48.006680  303486 cri.go:89] found id: ""
	I0920 19:07:48.006717  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.006729  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:48.006738  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:48.006805  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:48.042230  303486 cri.go:89] found id: ""
	I0920 19:07:48.042261  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.042272  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:48.042281  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:48.042349  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:48.080779  303486 cri.go:89] found id: ""
	I0920 19:07:48.080836  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.080850  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:48.080860  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:48.080931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:48.119439  303486 cri.go:89] found id: ""
	I0920 19:07:48.119469  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.119477  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:48.119483  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:48.119536  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:48.156219  303486 cri.go:89] found id: ""
	I0920 19:07:48.156258  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.156269  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:48.156279  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:48.156354  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:48.192112  303486 cri.go:89] found id: ""
	I0920 19:07:48.192151  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.192162  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:48.192170  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:48.192240  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:48.228916  303486 cri.go:89] found id: ""
	I0920 19:07:48.228958  303486 logs.go:276] 0 containers: []
	W0920 19:07:48.228968  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:48.228981  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:48.229003  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:48.284073  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:48.284115  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:48.297677  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:48.297713  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:48.374834  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:48.374860  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:48.374876  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:48.455468  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:48.455512  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
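The cycle above probes each control-plane component by asking the CRI runtime for containers whose name matches ("sudo crictl ps -a --quiet --name=...") and treating empty output as "no container was found". As a rough illustration only (this is not minikube's actual cri.go/logs.go code; the component list and the probeContainer helper name are assumptions made here), the same probe pattern can be sketched in Go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probeContainer mirrors the pattern in the log above: ask the CRI runtime for
// container IDs whose name matches, and treat empty output as "not found".
// Illustrative only; assumes crictl is on PATH and runnable via sudo.
func probeContainer(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, c := range components {
		ids, err := probeContainer(c)
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
		} else {
			fmt.Printf("%q containers: %v\n", c, ids)
		}
	}
}

Since every probe in the log returns an empty ID list, the control-plane containers have not come up, which is why the subsequent "describe nodes" calls are refused on localhost:8443.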
	I0920 19:07:45.284055  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:47.783744  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:46.461832  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:48.462980  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:50.463485  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:48.605035  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:51.105952  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:50.998354  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:51.012827  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:51.012904  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:51.046701  303486 cri.go:89] found id: ""
	I0920 19:07:51.046739  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.046750  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:51.046758  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:51.046827  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:51.083829  303486 cri.go:89] found id: ""
	I0920 19:07:51.083867  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.083878  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:51.083891  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:51.083965  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:51.124126  303486 cri.go:89] found id: ""
	I0920 19:07:51.124170  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.124180  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:51.124187  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:51.124254  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:51.159141  303486 cri.go:89] found id: ""
	I0920 19:07:51.159175  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.159184  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:51.159190  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:51.159253  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:51.192793  303486 cri.go:89] found id: ""
	I0920 19:07:51.192829  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.192840  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:51.192863  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:51.192938  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:51.225489  303486 cri.go:89] found id: ""
	I0920 19:07:51.225515  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.225524  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:51.225530  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:51.225582  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:51.258256  303486 cri.go:89] found id: ""
	I0920 19:07:51.258283  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.258294  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:51.258301  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:51.258363  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:51.292474  303486 cri.go:89] found id: ""
	I0920 19:07:51.292504  303486 logs.go:276] 0 containers: []
	W0920 19:07:51.292512  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:51.292522  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:51.292537  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:51.331386  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:51.331422  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:51.385136  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:51.385182  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:51.400792  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:51.400828  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:51.492771  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:51.492795  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:51.492810  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:49.784132  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:52.284075  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:54.284870  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:52.963813  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:55.464095  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:53.607259  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:56.106592  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
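The interleaved pod_ready lines come from parallel test clusters polling the metrics-server pod until its Ready condition turns True. A minimal client-go sketch of that kind of wait loop follows; it is illustrative only and not the helpers in pod_ready.go, with the kubeconfig path, pod name, and poll interval taken as assumptions for the example:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path (taken from the commands in the log) and pod name.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-6867b74b74-2tnqc", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // poll interval chosen arbitrarily for the sketch
	}
	fmt.Println("timed out waiting for pod to become Ready")
}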
	I0920 19:07:54.074889  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:54.088453  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:54.088534  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:54.125096  303486 cri.go:89] found id: ""
	I0920 19:07:54.125138  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.125159  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:54.125166  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:54.125231  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:54.159630  303486 cri.go:89] found id: ""
	I0920 19:07:54.159665  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.159676  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:54.159685  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:54.159759  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:54.195919  303486 cri.go:89] found id: ""
	I0920 19:07:54.195951  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.195965  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:54.195972  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:54.196042  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:54.230294  303486 cri.go:89] found id: ""
	I0920 19:07:54.230323  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.230332  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:54.230339  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:54.230396  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:54.266764  303486 cri.go:89] found id: ""
	I0920 19:07:54.266793  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.266800  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:54.266807  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:54.266865  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:54.300704  303486 cri.go:89] found id: ""
	I0920 19:07:54.300731  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.300741  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:54.300750  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:54.300817  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:54.334447  303486 cri.go:89] found id: ""
	I0920 19:07:54.334473  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.334480  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:54.334487  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:54.334546  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:54.369814  303486 cri.go:89] found id: ""
	I0920 19:07:54.369858  303486 logs.go:276] 0 containers: []
	W0920 19:07:54.369866  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:54.369878  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:54.369890  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:54.423088  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:54.423135  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:54.436770  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:54.436801  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:54.510731  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:54.510757  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:54.510773  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:54.593041  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:54.593091  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:57.134030  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:57.147605  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:07:57.147674  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:07:57.202662  303486 cri.go:89] found id: ""
	I0920 19:07:57.202690  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.202699  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:07:57.202705  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:07:57.202757  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:07:57.236448  303486 cri.go:89] found id: ""
	I0920 19:07:57.236476  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.236484  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:07:57.236493  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:07:57.236558  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:07:57.269450  303486 cri.go:89] found id: ""
	I0920 19:07:57.269478  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.269485  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:07:57.269491  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:07:57.269544  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:07:57.305749  303486 cri.go:89] found id: ""
	I0920 19:07:57.305784  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.305795  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:07:57.305806  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:07:57.305877  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:07:57.339802  303486 cri.go:89] found id: ""
	I0920 19:07:57.339844  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.339857  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:07:57.339866  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:07:57.339942  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:07:57.371929  303486 cri.go:89] found id: ""
	I0920 19:07:57.371962  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.371971  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:07:57.371980  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:07:57.372051  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:07:57.405749  303486 cri.go:89] found id: ""
	I0920 19:07:57.405789  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.405802  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:07:57.405812  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:07:57.405888  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:07:57.439259  303486 cri.go:89] found id: ""
	I0920 19:07:57.439291  303486 logs.go:276] 0 containers: []
	W0920 19:07:57.439300  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:07:57.439310  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:07:57.439323  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:07:57.491405  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:07:57.491450  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:07:57.505992  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:07:57.506027  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:07:57.580598  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:07:57.580623  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:07:57.580638  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:07:57.659475  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:07:57.659513  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:07:56.783867  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:58.783944  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:57.465789  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:59.963589  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:07:58.606492  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:01.105967  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:00.201478  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:00.217162  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:00.217228  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:00.252219  303486 cri.go:89] found id: ""
	I0920 19:08:00.252247  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.252256  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:00.252263  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:00.252334  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:00.287244  303486 cri.go:89] found id: ""
	I0920 19:08:00.287283  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.287295  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:00.287302  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:00.287367  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:00.325785  303486 cri.go:89] found id: ""
	I0920 19:08:00.325818  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.325829  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:00.325839  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:00.325931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:00.359718  303486 cri.go:89] found id: ""
	I0920 19:08:00.359747  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.359757  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:00.359766  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:00.359847  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:00.399105  303486 cri.go:89] found id: ""
	I0920 19:08:00.399147  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.399156  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:00.399163  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:00.399227  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:00.433647  303486 cri.go:89] found id: ""
	I0920 19:08:00.433675  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.433683  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:00.433692  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:00.433756  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:00.467771  303486 cri.go:89] found id: ""
	I0920 19:08:00.467820  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.467832  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:00.467841  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:00.467911  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:00.511320  303486 cri.go:89] found id: ""
	I0920 19:08:00.511363  303486 logs.go:276] 0 containers: []
	W0920 19:08:00.511376  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:00.511392  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:00.511414  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:00.594669  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:00.594703  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:00.594723  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:00.672747  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:00.672800  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:00.710001  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:00.710049  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:00.760333  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:00.760378  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
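Each gathering cycle above collects the same five diagnostics: the kubelet journal, filtered dmesg, kubectl describe nodes (which fails with "connection refused" on localhost:8443 because no apiserver is serving), the CRI-O journal, and container status. A rough Go sketch of collecting that bundle, with the shell commands copied verbatim from the log, might look like the following; it is illustrative only, not the logs.go implementation, and it assumes it runs inside the minikube guest where these commands exist:

package main

import (
	"fmt"
	"os/exec"
)

// The diagnostic bundle gathered on every cycle in the log, as name -> command.
var sources = map[string]string{
	"kubelet":          "sudo journalctl -u kubelet -n 400",
	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"describe nodes":   "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
	"CRI-O":            "sudo journalctl -u crio -n 400",
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			// "describe nodes" is expected to fail while the control plane is
			// down, since nothing answers on localhost:8443.
			fmt.Printf("gathering %s failed: %v\n", name, err)
		}
		fmt.Printf("=== %s ===\n%s\n", name, out)
	}
}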
	I0920 19:08:03.274393  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:03.289260  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:03.289352  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:03.327884  303486 cri.go:89] found id: ""
	I0920 19:08:03.327919  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.327932  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:03.327942  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:03.328015  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:03.367259  303486 cri.go:89] found id: ""
	I0920 19:08:03.367289  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.367297  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:03.367303  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:03.367361  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:03.405843  303486 cri.go:89] found id: ""
	I0920 19:08:03.405899  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.405932  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:03.405942  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:03.406056  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:03.441026  303486 cri.go:89] found id: ""
	I0920 19:08:03.441058  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.441069  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:03.441078  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:03.441147  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:03.477213  303486 cri.go:89] found id: ""
	I0920 19:08:03.477249  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.477261  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:03.477327  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:03.477415  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:03.515843  303486 cri.go:89] found id: ""
	I0920 19:08:03.515880  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.515888  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:03.515895  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:03.515945  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:03.566972  303486 cri.go:89] found id: ""
	I0920 19:08:03.567009  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.567020  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:03.567028  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:03.567097  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:03.616957  303486 cri.go:89] found id: ""
	I0920 19:08:03.617000  303486 logs.go:276] 0 containers: []
	W0920 19:08:03.617013  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:03.617029  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:03.617048  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:03.683140  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:03.683192  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:03.697225  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:03.697267  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:03.770430  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:03.770455  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:03.770478  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:03.848796  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:03.848836  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:01.284245  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:03.284437  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:01.964058  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:04.462786  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:03.607506  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:06.106008  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:06.387706  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:06.401600  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:06.401669  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:06.437854  303486 cri.go:89] found id: ""
	I0920 19:08:06.437890  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.437917  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:06.437926  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:06.437993  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:06.472617  303486 cri.go:89] found id: ""
	I0920 19:08:06.472647  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.472655  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:06.472662  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:06.472718  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:06.510083  303486 cri.go:89] found id: ""
	I0920 19:08:06.510118  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.510131  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:06.510140  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:06.510212  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:06.546388  303486 cri.go:89] found id: ""
	I0920 19:08:06.546418  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.546427  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:06.546434  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:06.546485  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:06.584043  303486 cri.go:89] found id: ""
	I0920 19:08:06.584084  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.584096  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:06.584106  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:06.584182  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:06.622118  303486 cri.go:89] found id: ""
	I0920 19:08:06.622147  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.622155  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:06.622161  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:06.622217  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:06.655513  303486 cri.go:89] found id: ""
	I0920 19:08:06.655552  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.655585  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:06.655593  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:06.655657  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:06.690286  303486 cri.go:89] found id: ""
	I0920 19:08:06.690324  303486 logs.go:276] 0 containers: []
	W0920 19:08:06.690336  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:06.690350  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:06.690368  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:06.729229  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:06.729259  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:06.780368  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:06.780411  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:06.794746  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:06.794782  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:06.866918  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:06.866944  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:06.866967  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:05.784123  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:08.284383  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:06.462855  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:08.466867  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:10.963736  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:08.106490  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:10.606291  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:09.451583  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:09.465111  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:09.465178  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:09.497679  303486 cri.go:89] found id: ""
	I0920 19:08:09.497713  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.497725  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:09.497733  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:09.497797  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:09.535297  303486 cri.go:89] found id: ""
	I0920 19:08:09.535334  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.535345  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:09.535353  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:09.535427  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:09.572449  303486 cri.go:89] found id: ""
	I0920 19:08:09.572482  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.572491  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:09.572498  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:09.572608  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:09.612672  303486 cri.go:89] found id: ""
	I0920 19:08:09.612697  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.612705  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:09.612711  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:09.612797  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:09.654366  303486 cri.go:89] found id: ""
	I0920 19:08:09.654399  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.654408  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:09.654415  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:09.654470  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:09.694825  303486 cri.go:89] found id: ""
	I0920 19:08:09.694858  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.694870  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:09.694878  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:09.694942  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:09.731618  303486 cri.go:89] found id: ""
	I0920 19:08:09.731682  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.731693  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:09.731702  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:09.731775  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:09.766717  303486 cri.go:89] found id: ""
	I0920 19:08:09.766755  303486 logs.go:276] 0 containers: []
	W0920 19:08:09.766765  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:09.766779  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:09.766794  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:09.823505  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:09.823549  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:09.837622  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:09.837658  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:09.919105  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:09.919139  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:09.919156  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:10.000899  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:10.000943  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:12.542974  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:12.557265  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:12.557335  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:12.594099  303486 cri.go:89] found id: ""
	I0920 19:08:12.594126  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.594134  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:12.594140  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:12.594199  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:12.627271  303486 cri.go:89] found id: ""
	I0920 19:08:12.627301  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.627308  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:12.627314  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:12.627366  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:12.661225  303486 cri.go:89] found id: ""
	I0920 19:08:12.661256  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.661265  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:12.661272  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:12.661332  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:12.701381  303486 cri.go:89] found id: ""
	I0920 19:08:12.701424  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.701437  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:12.701447  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:12.701524  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:12.739189  303486 cri.go:89] found id: ""
	I0920 19:08:12.739227  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.739235  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:12.739246  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:12.739299  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:12.780931  303486 cri.go:89] found id: ""
	I0920 19:08:12.780958  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.781055  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:12.781068  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:12.781124  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:12.818097  303486 cri.go:89] found id: ""
	I0920 19:08:12.818137  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.818150  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:12.818161  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:12.818294  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:12.852925  303486 cri.go:89] found id: ""
	I0920 19:08:12.852957  303486 logs.go:276] 0 containers: []
	W0920 19:08:12.852965  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:12.852975  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:12.852990  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:12.924746  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:12.924774  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:12.924791  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:13.005668  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:13.005718  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:13.044327  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:13.044359  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:13.094788  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:13.094833  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:10.284510  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:12.783546  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:12.964694  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:15.463615  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:13.105052  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:15.604922  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:15.611965  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:15.625857  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:15.625960  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:15.662138  303486 cri.go:89] found id: ""
	I0920 19:08:15.662169  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.662177  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:15.662184  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:15.662261  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:15.696000  303486 cri.go:89] found id: ""
	I0920 19:08:15.696067  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.696100  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:15.696115  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:15.696234  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:15.735594  303486 cri.go:89] found id: ""
	I0920 19:08:15.735625  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.735633  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:15.735640  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:15.735699  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:15.774666  303486 cri.go:89] found id: ""
	I0920 19:08:15.774693  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.774703  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:15.774712  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:15.774777  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:15.810754  303486 cri.go:89] found id: ""
	I0920 19:08:15.810799  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.810811  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:15.810820  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:15.810884  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:15.846709  303486 cri.go:89] found id: ""
	I0920 19:08:15.846739  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.846748  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:15.846757  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:15.846819  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:15.880798  303486 cri.go:89] found id: ""
	I0920 19:08:15.880825  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.880833  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:15.880839  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:15.880895  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:15.915119  303486 cri.go:89] found id: ""
	I0920 19:08:15.915150  303486 logs.go:276] 0 containers: []
	W0920 19:08:15.915159  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:15.915170  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:15.915186  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:15.966048  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:15.966087  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:15.979287  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:15.979322  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:16.052129  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:16.052163  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:16.052180  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:16.137743  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:16.137788  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:18.678389  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:18.693073  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:18.693152  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:18.734909  303486 cri.go:89] found id: ""
	I0920 19:08:18.734943  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.734954  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:18.734962  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:18.735028  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:18.773472  303486 cri.go:89] found id: ""
	I0920 19:08:18.773506  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.773517  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:18.773525  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:18.773620  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:18.812184  303486 cri.go:89] found id: ""
	I0920 19:08:18.812218  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.812228  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:18.812236  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:18.812305  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:18.846569  303486 cri.go:89] found id: ""
	I0920 19:08:18.846608  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.846619  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:18.846627  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:18.846700  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:18.881794  303486 cri.go:89] found id: ""
	I0920 19:08:18.881836  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.881862  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:18.881870  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:18.881943  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:18.919657  303486 cri.go:89] found id: ""
	I0920 19:08:18.919688  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.919698  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:18.919708  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:18.919774  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:14.784734  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:17.283590  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:19.284056  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:17.962913  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:20.462190  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:18.105736  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:20.106314  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:22.605231  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:18.955117  303486 cri.go:89] found id: ""
	I0920 19:08:18.955146  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.955157  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:18.955166  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:18.955243  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:18.992389  303486 cri.go:89] found id: ""
	I0920 19:08:18.992422  303486 logs.go:276] 0 containers: []
	W0920 19:08:18.992430  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:18.992444  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:18.992460  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:19.070374  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:19.070417  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:19.110793  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:19.110825  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:19.163783  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:19.163830  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:19.177348  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:19.177387  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:19.249469  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:21.749644  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:21.764920  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:21.765006  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:21.803443  303486 cri.go:89] found id: ""
	I0920 19:08:21.803473  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.803481  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:21.803489  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:21.803545  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:21.844552  303486 cri.go:89] found id: ""
	I0920 19:08:21.844582  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.844593  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:21.844601  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:21.844672  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:21.878979  303486 cri.go:89] found id: ""
	I0920 19:08:21.879007  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.879017  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:21.879029  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:21.879099  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:21.915745  303486 cri.go:89] found id: ""
	I0920 19:08:21.915773  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.915783  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:21.915794  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:21.915865  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:21.948999  303486 cri.go:89] found id: ""
	I0920 19:08:21.949031  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.949043  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:21.949052  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:21.949118  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:21.984238  303486 cri.go:89] found id: ""
	I0920 19:08:21.984269  303486 logs.go:276] 0 containers: []
	W0920 19:08:21.984277  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:21.984284  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:21.984357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:22.018581  303486 cri.go:89] found id: ""
	I0920 19:08:22.018610  303486 logs.go:276] 0 containers: []
	W0920 19:08:22.018620  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:22.018628  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:22.018694  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:22.051868  303486 cri.go:89] found id: ""
	I0920 19:08:22.051903  303486 logs.go:276] 0 containers: []
	W0920 19:08:22.051913  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:22.051925  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:22.051942  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:22.106711  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:22.106756  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:22.120910  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:22.120940  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:22.196564  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:22.196591  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:22.196608  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:22.275235  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:22.275288  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:21.785129  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:24.284359  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:22.463122  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:24.962694  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:25.105050  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:27.105237  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:24.821956  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:24.836846  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:24.836918  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:24.878371  303486 cri.go:89] found id: ""
	I0920 19:08:24.878398  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.878406  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:24.878413  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:24.878464  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:24.911450  303486 cri.go:89] found id: ""
	I0920 19:08:24.911480  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.911489  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:24.911497  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:24.911590  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:24.949248  303486 cri.go:89] found id: ""
	I0920 19:08:24.949281  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.949289  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:24.949298  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:24.949352  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:24.987899  303486 cri.go:89] found id: ""
	I0920 19:08:24.987932  303486 logs.go:276] 0 containers: []
	W0920 19:08:24.987939  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:24.987948  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:24.988011  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:25.020589  303486 cri.go:89] found id: ""
	I0920 19:08:25.020627  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.020638  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:25.020646  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:25.020701  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:25.060223  303486 cri.go:89] found id: ""
	I0920 19:08:25.060250  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.060258  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:25.060266  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:25.060331  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:25.099111  303486 cri.go:89] found id: ""
	I0920 19:08:25.099141  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.099151  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:25.099160  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:25.099242  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:25.136055  303486 cri.go:89] found id: ""
	I0920 19:08:25.136089  303486 logs.go:276] 0 containers: []
	W0920 19:08:25.136098  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:25.136118  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:25.136135  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:25.187619  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:25.187658  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:25.200983  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:25.201016  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:25.270746  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:25.270778  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:25.270795  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:25.350009  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:25.350050  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:27.889864  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:27.903156  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:27.903231  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:27.935087  303486 cri.go:89] found id: ""
	I0920 19:08:27.935118  303486 logs.go:276] 0 containers: []
	W0920 19:08:27.935128  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:27.935138  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:27.935199  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:27.970451  303486 cri.go:89] found id: ""
	I0920 19:08:27.970479  303486 logs.go:276] 0 containers: []
	W0920 19:08:27.970487  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:27.970494  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:27.970545  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:28.004931  303486 cri.go:89] found id: ""
	I0920 19:08:28.004980  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.004992  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:28.005002  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:28.005068  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:28.039438  303486 cri.go:89] found id: ""
	I0920 19:08:28.039470  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.039478  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:28.039485  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:28.039535  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:28.076023  303486 cri.go:89] found id: ""
	I0920 19:08:28.076050  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.076058  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:28.076064  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:28.076131  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:28.114726  303486 cri.go:89] found id: ""
	I0920 19:08:28.114761  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.114772  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:28.114781  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:28.114846  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:28.150790  303486 cri.go:89] found id: ""
	I0920 19:08:28.150822  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.150832  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:28.150841  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:28.150908  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:28.186576  303486 cri.go:89] found id: ""
	I0920 19:08:28.186606  303486 logs.go:276] 0 containers: []
	W0920 19:08:28.186614  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:28.186626  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:28.186648  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:28.240939  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:28.240984  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:28.255267  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:28.255304  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:28.327773  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:28.327797  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:28.327809  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:28.418011  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:28.418055  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:26.785099  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:29.284297  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:26.962825  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:28.963261  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:30.963575  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:29.605453  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:32.104848  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:30.962398  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:30.975385  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:30.975471  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:31.009898  303486 cri.go:89] found id: ""
	I0920 19:08:31.009952  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.009964  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:31.009973  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:31.010044  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:31.043639  303486 cri.go:89] found id: ""
	I0920 19:08:31.043670  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.043679  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:31.043689  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:31.043758  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:31.077709  303486 cri.go:89] found id: ""
	I0920 19:08:31.077745  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.077753  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:31.077759  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:31.077818  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:31.111117  303486 cri.go:89] found id: ""
	I0920 19:08:31.111150  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.111160  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:31.111168  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:31.111234  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:31.143888  303486 cri.go:89] found id: ""
	I0920 19:08:31.143921  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.143933  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:31.143942  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:31.144014  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:31.176694  303486 cri.go:89] found id: ""
	I0920 19:08:31.176729  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.176742  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:31.176751  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:31.176815  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:31.213794  303486 cri.go:89] found id: ""
	I0920 19:08:31.213832  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.213844  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:31.213854  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:31.213946  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:31.250160  303486 cri.go:89] found id: ""
	I0920 19:08:31.250219  303486 logs.go:276] 0 containers: []
	W0920 19:08:31.250230  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:31.250244  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:31.250261  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:31.263748  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:31.263784  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:31.337719  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:31.337749  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:31.337762  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:31.420398  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:31.420446  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:31.459992  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:31.460030  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:31.284818  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:33.783288  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:33.462900  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:35.463122  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:34.105758  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:36.604917  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:34.014229  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:34.028129  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:34.028194  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:34.060793  303486 cri.go:89] found id: ""
	I0920 19:08:34.060832  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.060850  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:34.060859  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:34.060919  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:34.094440  303486 cri.go:89] found id: ""
	I0920 19:08:34.094467  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.094475  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:34.094481  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:34.094544  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:34.128824  303486 cri.go:89] found id: ""
	I0920 19:08:34.128861  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.128872  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:34.128881  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:34.128948  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:34.160861  303486 cri.go:89] found id: ""
	I0920 19:08:34.160894  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.160903  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:34.160911  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:34.160967  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:34.196897  303486 cri.go:89] found id: ""
	I0920 19:08:34.196933  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.196952  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:34.196958  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:34.197020  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:34.229083  303486 cri.go:89] found id: ""
	I0920 19:08:34.229115  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.229125  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:34.229134  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:34.229205  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:34.261877  303486 cri.go:89] found id: ""
	I0920 19:08:34.261922  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.261933  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:34.261941  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:34.262008  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:34.296145  303486 cri.go:89] found id: ""
	I0920 19:08:34.296177  303486 logs.go:276] 0 containers: []
	W0920 19:08:34.296189  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:34.296199  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:34.296214  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:34.361598  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:34.361624  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:34.361641  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:34.441067  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:34.441110  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:34.483333  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:34.483362  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:34.538345  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:34.538388  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:37.053155  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:37.067157  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:37.067230  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:37.101432  303486 cri.go:89] found id: ""
	I0920 19:08:37.101466  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.101476  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:37.101485  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:37.101550  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:37.134375  303486 cri.go:89] found id: ""
	I0920 19:08:37.134408  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.134416  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:37.134423  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:37.134487  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:37.167049  303486 cri.go:89] found id: ""
	I0920 19:08:37.167087  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.167099  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:37.167107  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:37.167175  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:37.209358  303486 cri.go:89] found id: ""
	I0920 19:08:37.209387  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.209397  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:37.209405  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:37.209470  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:37.243227  303486 cri.go:89] found id: ""
	I0920 19:08:37.243261  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.243272  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:37.243281  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:37.243332  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:37.276546  303486 cri.go:89] found id: ""
	I0920 19:08:37.276596  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.276607  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:37.276626  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:37.276688  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:37.311233  303486 cri.go:89] found id: ""
	I0920 19:08:37.311268  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.311279  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:37.311287  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:37.311352  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:37.349970  303486 cri.go:89] found id: ""
	I0920 19:08:37.350003  303486 logs.go:276] 0 containers: []
	W0920 19:08:37.350013  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:37.350025  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:37.350041  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:37.399405  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:37.399445  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:37.423764  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:37.423800  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:37.498797  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:37.498826  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:37.498841  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:37.575521  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:37.575566  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:35.783897  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:37.784496  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:37.463224  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:39.463445  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:38.605444  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:40.606712  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:40.118650  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:40.131967  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:40.132051  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:40.165313  303486 cri.go:89] found id: ""
	I0920 19:08:40.165349  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.165358  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:40.165366  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:40.165439  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:40.197194  303486 cri.go:89] found id: ""
	I0920 19:08:40.197223  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.197232  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:40.197238  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:40.197289  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:40.236769  303486 cri.go:89] found id: ""
	I0920 19:08:40.236800  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.236810  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:40.236819  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:40.236888  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:40.271960  303486 cri.go:89] found id: ""
	I0920 19:08:40.271984  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.271992  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:40.271998  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:40.272049  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:40.307874  303486 cri.go:89] found id: ""
	I0920 19:08:40.307909  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.307917  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:40.307923  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:40.307982  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:40.342128  303486 cri.go:89] found id: ""
	I0920 19:08:40.342160  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.342168  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:40.342175  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:40.342233  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:40.381493  303486 cri.go:89] found id: ""
	I0920 19:08:40.381529  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.381542  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:40.381551  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:40.381617  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:40.415164  303486 cri.go:89] found id: ""
	I0920 19:08:40.415199  303486 logs.go:276] 0 containers: []
	W0920 19:08:40.415211  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:40.415222  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:40.415238  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:40.488306  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:40.488330  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:40.488350  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:40.567193  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:40.567235  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:40.607256  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:40.607287  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:40.659504  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:40.659542  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:43.174043  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:43.188690  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:43.188790  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:43.227223  303486 cri.go:89] found id: ""
	I0920 19:08:43.227251  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.227259  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:43.227267  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:43.227356  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:43.260099  303486 cri.go:89] found id: ""
	I0920 19:08:43.260128  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.260137  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:43.260143  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:43.260195  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:43.297846  303486 cri.go:89] found id: ""
	I0920 19:08:43.297875  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.297886  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:43.297894  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:43.297980  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:43.334026  303486 cri.go:89] found id: ""
	I0920 19:08:43.334061  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.334070  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:43.334078  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:43.334147  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:43.367765  303486 cri.go:89] found id: ""
	I0920 19:08:43.367795  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.367806  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:43.367814  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:43.367890  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:43.402722  303486 cri.go:89] found id: ""
	I0920 19:08:43.402766  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.402778  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:43.402787  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:43.402852  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:43.439643  303486 cri.go:89] found id: ""
	I0920 19:08:43.439674  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.439682  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:43.439690  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:43.439742  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:43.475931  303486 cri.go:89] found id: ""
	I0920 19:08:43.475965  303486 logs.go:276] 0 containers: []
	W0920 19:08:43.475976  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:43.475991  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:43.476006  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:43.545694  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:43.545725  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:43.545739  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:43.627493  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:43.627549  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:43.667758  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:43.667794  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:43.721803  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:43.721851  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:40.285524  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:42.784336  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:41.962300  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:43.963712  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:45.963766  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:43.105271  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:45.105737  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:47.604667  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:46.237499  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:46.250854  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:46.250925  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:46.288918  303486 cri.go:89] found id: ""
	I0920 19:08:46.288950  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.288957  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:46.288964  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:46.289026  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:46.321113  303486 cri.go:89] found id: ""
	I0920 19:08:46.321149  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.321159  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:46.321168  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:46.321239  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:46.359606  303486 cri.go:89] found id: ""
	I0920 19:08:46.359643  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.359652  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:46.359659  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:46.359729  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:46.397059  303486 cri.go:89] found id: ""
	I0920 19:08:46.397089  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.397098  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:46.397104  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:46.397174  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:46.438224  303486 cri.go:89] found id: ""
	I0920 19:08:46.438261  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.438271  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:46.438279  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:46.438355  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:46.476933  303486 cri.go:89] found id: ""
	I0920 19:08:46.476963  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.476973  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:46.476981  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:46.477047  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:46.522115  303486 cri.go:89] found id: ""
	I0920 19:08:46.522150  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.522160  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:46.522167  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:46.522236  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:46.555508  303486 cri.go:89] found id: ""
	I0920 19:08:46.555541  303486 logs.go:276] 0 containers: []
	W0920 19:08:46.555551  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:46.555565  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:46.555580  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:46.632314  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:46.632358  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:46.672381  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:46.672420  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:46.725777  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:46.725835  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:46.739924  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:46.739959  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:46.816667  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:45.284171  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:47.284420  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:49.284798  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:48.462088  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:50.463100  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:49.606279  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:52.105103  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:49.317620  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:49.331792  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:49.331872  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:49.365417  303486 cri.go:89] found id: ""
	I0920 19:08:49.365457  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.365470  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:49.365479  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:49.365543  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:49.399422  303486 cri.go:89] found id: ""
	I0920 19:08:49.399455  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.399465  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:49.399474  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:49.399532  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:49.433040  303486 cri.go:89] found id: ""
	I0920 19:08:49.433069  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.433076  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:49.433082  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:49.433149  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:49.466865  303486 cri.go:89] found id: ""
	I0920 19:08:49.466897  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.466909  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:49.466917  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:49.466986  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:49.499542  303486 cri.go:89] found id: ""
	I0920 19:08:49.499574  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.499583  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:49.499589  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:49.499639  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:49.534310  303486 cri.go:89] found id: ""
	I0920 19:08:49.534338  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.534346  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:49.534353  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:49.534411  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:49.580271  303486 cri.go:89] found id: ""
	I0920 19:08:49.580297  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.580305  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:49.580312  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:49.580385  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:49.626519  303486 cri.go:89] found id: ""
	I0920 19:08:49.626554  303486 logs.go:276] 0 containers: []
	W0920 19:08:49.626562  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:49.626572  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:49.626587  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:49.682923  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:49.682963  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:49.695859  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:49.695895  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:49.767626  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:49.767669  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:49.767697  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:49.849570  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:49.849614  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:52.387653  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:52.400693  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:52.400757  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:52.434320  303486 cri.go:89] found id: ""
	I0920 19:08:52.434358  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.434369  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:52.434381  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:52.434448  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:52.469167  303486 cri.go:89] found id: ""
	I0920 19:08:52.469202  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.469214  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:52.469222  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:52.469291  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:52.504241  303486 cri.go:89] found id: ""
	I0920 19:08:52.504287  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.504295  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:52.504304  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:52.504367  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:52.539573  303486 cri.go:89] found id: ""
	I0920 19:08:52.539604  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.539613  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:52.539619  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:52.539697  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:52.573794  303486 cri.go:89] found id: ""
	I0920 19:08:52.573821  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.573829  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:52.573834  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:52.573931  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:52.607628  303486 cri.go:89] found id: ""
	I0920 19:08:52.607660  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.607670  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:52.607676  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:52.607738  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:52.639088  303486 cri.go:89] found id: ""
	I0920 19:08:52.639121  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.639132  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:52.639140  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:52.639204  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:52.673585  303486 cri.go:89] found id: ""
	I0920 19:08:52.673624  303486 logs.go:276] 0 containers: []
	W0920 19:08:52.673636  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:52.673650  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:52.673667  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:52.726463  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:52.726504  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:52.739520  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:52.739553  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:52.820610  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:52.820638  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:52.820653  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:52.898567  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:52.898612  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:51.783687  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:53.784963  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:52.962326  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:54.963069  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:54.105159  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:56.604367  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:55.440875  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:55.454526  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:55.454602  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:55.490616  303486 cri.go:89] found id: ""
	I0920 19:08:55.490655  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.490664  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:55.490671  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:55.490735  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:55.530256  303486 cri.go:89] found id: ""
	I0920 19:08:55.530287  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.530296  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:55.530304  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:55.530357  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:55.565209  303486 cri.go:89] found id: ""
	I0920 19:08:55.565242  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.565253  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:55.565260  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:55.565319  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:55.599522  303486 cri.go:89] found id: ""
	I0920 19:08:55.599553  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.599563  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:55.599571  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:55.599634  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:55.634662  303486 cri.go:89] found id: ""
	I0920 19:08:55.634692  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.634700  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:55.634707  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:55.634759  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:55.670326  303486 cri.go:89] found id: ""
	I0920 19:08:55.670361  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.670372  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:55.670379  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:55.670434  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:55.702589  303486 cri.go:89] found id: ""
	I0920 19:08:55.702617  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.702625  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:55.702632  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:55.702694  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:55.737615  303486 cri.go:89] found id: ""
	I0920 19:08:55.737643  303486 logs.go:276] 0 containers: []
	W0920 19:08:55.737653  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:55.737667  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:55.737682  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:55.816827  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:55.816873  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:55.855521  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:55.855550  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:08:55.905002  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:55.905047  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:55.918292  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:55.918324  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:55.987445  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:58.488566  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:08:58.503898  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:08:58.504001  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:08:58.539089  303486 cri.go:89] found id: ""
	I0920 19:08:58.539117  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.539127  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:08:58.539135  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:08:58.539199  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:08:58.576432  303486 cri.go:89] found id: ""
	I0920 19:08:58.576459  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.576467  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:08:58.576473  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:08:58.576542  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:08:58.613779  303486 cri.go:89] found id: ""
	I0920 19:08:58.613814  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.613825  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:08:58.613833  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:08:58.613932  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:08:58.648717  303486 cri.go:89] found id: ""
	I0920 19:08:58.648757  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.648768  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:08:58.648777  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:08:58.648845  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:08:58.681533  303486 cri.go:89] found id: ""
	I0920 19:08:58.681568  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.681585  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:08:58.681593  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:08:58.681647  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:08:58.714833  303486 cri.go:89] found id: ""
	I0920 19:08:58.714867  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.714877  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:08:58.714886  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:08:58.714951  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:08:58.755939  303486 cri.go:89] found id: ""
	I0920 19:08:58.755972  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.755980  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:08:58.755986  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:08:58.756037  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:08:58.793195  303486 cri.go:89] found id: ""
	I0920 19:08:58.793229  303486 logs.go:276] 0 containers: []
	W0920 19:08:58.793240  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:08:58.793252  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:08:58.793267  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:08:58.807903  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:08:58.807939  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:08:58.873993  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:08:58.874022  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:08:58.874042  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:08:56.283846  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.286474  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:56.963398  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.963513  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.606087  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:01.106199  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:08:58.955201  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:08:58.955249  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:08:58.994230  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:08:58.994265  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:01.548403  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:01.561467  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:01.561541  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:01.595339  303486 cri.go:89] found id: ""
	I0920 19:09:01.595374  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.595382  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:01.595388  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:01.595463  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:01.631995  303486 cri.go:89] found id: ""
	I0920 19:09:01.632033  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.632043  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:01.632051  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:01.632119  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:01.667556  303486 cri.go:89] found id: ""
	I0920 19:09:01.667586  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.667596  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:01.667604  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:01.667669  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:01.702678  303486 cri.go:89] found id: ""
	I0920 19:09:01.702708  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.702716  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:01.702723  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:01.702786  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:01.739953  303486 cri.go:89] found id: ""
	I0920 19:09:01.739987  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.739999  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:01.740008  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:01.740075  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:01.774188  303486 cri.go:89] found id: ""
	I0920 19:09:01.774222  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.774239  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:01.774249  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:01.774317  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:01.808885  303486 cri.go:89] found id: ""
	I0920 19:09:01.808916  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.808927  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:01.808935  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:01.808997  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:01.842357  303486 cri.go:89] found id: ""
	I0920 19:09:01.842394  303486 logs.go:276] 0 containers: []
	W0920 19:09:01.842404  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:01.842417  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:01.842433  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:01.881750  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:01.881782  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:01.932190  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:01.932236  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:01.946305  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:01.946337  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:02.020099  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:02.020127  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:02.020141  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:00.784428  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:03.284109  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:01.462613  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:03.962360  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:05.963735  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:03.605623  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:06.104994  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:04.601186  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:04.614292  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:04.614374  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:04.649579  303486 cri.go:89] found id: ""
	I0920 19:09:04.649611  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.649619  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:04.649625  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:04.649683  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:04.684039  303486 cri.go:89] found id: ""
	I0920 19:09:04.684076  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.684094  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:04.684108  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:04.684182  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:04.729130  303486 cri.go:89] found id: ""
	I0920 19:09:04.729166  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.729177  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:04.729186  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:04.729244  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:04.762646  303486 cri.go:89] found id: ""
	I0920 19:09:04.762682  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.762690  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:04.762697  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:04.762761  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:04.797492  303486 cri.go:89] found id: ""
	I0920 19:09:04.797518  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.797527  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:04.797533  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:04.797588  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:04.832780  303486 cri.go:89] found id: ""
	I0920 19:09:04.832813  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.832823  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:04.832831  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:04.832893  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:04.868489  303486 cri.go:89] found id: ""
	I0920 19:09:04.868526  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.868537  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:04.868546  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:04.868613  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:04.901115  303486 cri.go:89] found id: ""
	I0920 19:09:04.901156  303486 logs.go:276] 0 containers: []
	W0920 19:09:04.901164  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:04.901174  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:04.901186  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:04.952435  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:04.952482  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:04.966450  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:04.966481  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:05.035951  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:05.035977  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:05.035991  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:05.120961  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:05.121016  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:07.659497  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:07.672989  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:07.673062  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:07.708200  303486 cri.go:89] found id: ""
	I0920 19:09:07.708236  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.708247  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:07.708256  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:07.708320  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:07.742116  303486 cri.go:89] found id: ""
	I0920 19:09:07.742156  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.742166  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:07.742175  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:07.742231  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:07.774369  303486 cri.go:89] found id: ""
	I0920 19:09:07.774401  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.774410  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:07.774419  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:07.774485  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:07.811727  303486 cri.go:89] found id: ""
	I0920 19:09:07.811756  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.811763  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:07.811769  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:07.811825  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:07.849613  303486 cri.go:89] found id: ""
	I0920 19:09:07.849646  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.849655  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:07.849661  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:07.849715  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:07.884643  303486 cri.go:89] found id: ""
	I0920 19:09:07.884679  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.884690  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:07.884698  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:07.884770  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:07.920240  303486 cri.go:89] found id: ""
	I0920 19:09:07.920272  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.920283  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:07.920292  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:07.920371  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:07.954729  303486 cri.go:89] found id: ""
	I0920 19:09:07.954768  303486 logs.go:276] 0 containers: []
	W0920 19:09:07.954780  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:07.954792  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:07.954808  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:08.008679  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:08.008732  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:08.023637  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:08.023673  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:08.097298  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:08.097325  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:08.097340  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:08.173404  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:08.173444  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:05.783765  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:08.283642  302869 pod_ready.go:103] pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:08.462994  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:10.965062  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:08.106350  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:10.605138  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:12.605390  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:10.718224  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:10.732520  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:10.732593  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:10.766764  303486 cri.go:89] found id: ""
	I0920 19:09:10.766800  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.766811  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:10.766821  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:10.766887  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:10.800039  303486 cri.go:89] found id: ""
	I0920 19:09:10.800077  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.800087  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:10.800095  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:10.800157  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:10.833931  303486 cri.go:89] found id: ""
	I0920 19:09:10.833969  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.833979  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:10.833985  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:10.834057  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:10.867714  303486 cri.go:89] found id: ""
	I0920 19:09:10.867752  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.867763  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:10.867771  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:10.867840  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:10.903026  303486 cri.go:89] found id: ""
	I0920 19:09:10.903060  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.903068  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:10.903075  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:10.903131  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:10.936968  303486 cri.go:89] found id: ""
	I0920 19:09:10.937002  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.937013  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:10.937021  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:10.937089  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:10.973055  303486 cri.go:89] found id: ""
	I0920 19:09:10.973079  303486 logs.go:276] 0 containers: []
	W0920 19:09:10.973087  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:10.973093  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:10.973145  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:11.010283  303486 cri.go:89] found id: ""
	I0920 19:09:11.010310  303486 logs.go:276] 0 containers: []
	W0920 19:09:11.010321  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:11.010333  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:11.010352  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:11.025202  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:11.025239  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:11.104268  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:11.104295  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:11.104312  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:11.182281  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:11.182326  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:11.219296  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:11.219335  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:13.767833  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:13.780805  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:13.780890  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:13.822288  303486 cri.go:89] found id: ""
	I0920 19:09:13.822317  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.822327  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:13.822334  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:13.822388  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:13.862068  303486 cri.go:89] found id: ""
	I0920 19:09:13.862098  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.862106  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:13.862112  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:13.862163  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:13.898497  303486 cri.go:89] found id: ""
	I0920 19:09:13.898529  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.898540  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:13.898550  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:13.898618  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:13.935994  303486 cri.go:89] found id: ""
	I0920 19:09:13.936022  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.936030  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:13.936038  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:13.936105  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:10.277863  302869 pod_ready.go:82] duration metric: took 4m0.000569658s for pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace to be "Ready" ...
	E0920 19:09:10.277919  302869 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qqhcw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 19:09:10.277965  302869 pod_ready.go:39] duration metric: took 4m13.052343801s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:10.278003  302869 kubeadm.go:597] duration metric: took 4m21.10965758s to restartPrimaryControlPlane
	W0920 19:09:10.278125  302869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 19:09:10.278168  302869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:09:13.462752  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:15.962371  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:14.605565  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:17.112026  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:13.973764  303486 cri.go:89] found id: ""
	I0920 19:09:13.973801  303486 logs.go:276] 0 containers: []
	W0920 19:09:13.973812  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:13.973820  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:13.973898  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:14.009443  303486 cri.go:89] found id: ""
	I0920 19:09:14.009482  303486 logs.go:276] 0 containers: []
	W0920 19:09:14.009494  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:14.009502  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:14.009577  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:14.045593  303486 cri.go:89] found id: ""
	I0920 19:09:14.045629  303486 logs.go:276] 0 containers: []
	W0920 19:09:14.045639  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:14.045648  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:14.045714  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:14.086273  303486 cri.go:89] found id: ""
	I0920 19:09:14.086310  303486 logs.go:276] 0 containers: []
	W0920 19:09:14.086319  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:14.086330  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:14.086343  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:14.140730  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:14.140772  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:14.154198  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:14.154232  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:14.224716  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:14.224739  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:14.224754  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:14.302625  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:14.302665  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:16.840816  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:16.854905  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:16.855002  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:16.892994  303486 cri.go:89] found id: ""
	I0920 19:09:16.893028  303486 logs.go:276] 0 containers: []
	W0920 19:09:16.893038  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:16.893045  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:16.893103  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:16.931265  303486 cri.go:89] found id: ""
	I0920 19:09:16.931293  303486 logs.go:276] 0 containers: []
	W0920 19:09:16.931307  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:16.931313  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:16.931364  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:16.970085  303486 cri.go:89] found id: ""
	I0920 19:09:16.970119  303486 logs.go:276] 0 containers: []
	W0920 19:09:16.970129  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:16.970138  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:16.970189  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:17.003163  303486 cri.go:89] found id: ""
	I0920 19:09:17.003194  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.003206  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:17.003214  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:17.003282  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:17.040577  303486 cri.go:89] found id: ""
	I0920 19:09:17.040618  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.040633  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:17.040640  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:17.040706  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:17.073946  303486 cri.go:89] found id: ""
	I0920 19:09:17.073986  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.073995  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:17.074006  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:17.074066  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:17.111569  303486 cri.go:89] found id: ""
	I0920 19:09:17.111636  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.111648  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:17.111664  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:17.111730  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:17.148005  303486 cri.go:89] found id: ""
	I0920 19:09:17.148034  303486 logs.go:276] 0 containers: []
	W0920 19:09:17.148044  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:17.148056  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:17.148072  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:17.222281  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:17.222306  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:17.222324  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:17.297577  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:17.297619  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:17.334709  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:17.334740  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:17.386279  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:17.386320  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:17.962802  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:19.963289  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:19.605813  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:22.105024  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:19.901017  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:19.914489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:19.914571  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:19.955023  303486 cri.go:89] found id: ""
	I0920 19:09:19.955051  303486 logs.go:276] 0 containers: []
	W0920 19:09:19.955060  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:19.955067  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:19.955125  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:19.995536  303486 cri.go:89] found id: ""
	I0920 19:09:19.995575  303486 logs.go:276] 0 containers: []
	W0920 19:09:19.995585  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:19.995594  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:19.995650  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:20.031153  303486 cri.go:89] found id: ""
	I0920 19:09:20.031181  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.031190  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:20.031198  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:20.031266  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:20.064145  303486 cri.go:89] found id: ""
	I0920 19:09:20.064174  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.064190  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:20.064199  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:20.064256  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:20.098399  303486 cri.go:89] found id: ""
	I0920 19:09:20.098429  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.098440  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:20.098449  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:20.098505  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:20.138805  303486 cri.go:89] found id: ""
	I0920 19:09:20.138833  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.138843  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:20.138852  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:20.138914  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:20.183291  303486 cri.go:89] found id: ""
	I0920 19:09:20.183322  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.183333  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:20.183342  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:20.183406  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:20.220344  303486 cri.go:89] found id: ""
	I0920 19:09:20.220378  303486 logs.go:276] 0 containers: []
	W0920 19:09:20.220396  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:20.220409  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:20.220426  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:20.271043  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:20.271086  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:20.286724  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:20.286754  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:20.358233  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:20.358273  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:20.358291  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:20.439511  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:20.439568  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:22.982570  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:22.995384  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:22.995475  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:23.029031  303486 cri.go:89] found id: ""
	I0920 19:09:23.029069  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.029081  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:23.029091  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:23.029166  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:23.063291  303486 cri.go:89] found id: ""
	I0920 19:09:23.063325  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.063336  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:23.063343  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:23.063413  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:23.097494  303486 cri.go:89] found id: ""
	I0920 19:09:23.097525  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.097536  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:23.097545  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:23.097610  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:23.132169  303486 cri.go:89] found id: ""
	I0920 19:09:23.132197  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.132204  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:23.132211  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:23.132276  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:23.173651  303486 cri.go:89] found id: ""
	I0920 19:09:23.173682  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.173692  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:23.173700  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:23.173763  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:23.206098  303486 cri.go:89] found id: ""
	I0920 19:09:23.206135  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.206146  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:23.206155  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:23.206216  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:23.245422  303486 cri.go:89] found id: ""
	I0920 19:09:23.245466  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.245479  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:23.245489  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:23.245569  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:23.280326  303486 cri.go:89] found id: ""
	I0920 19:09:23.280357  303486 logs.go:276] 0 containers: []
	W0920 19:09:23.280365  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:23.280376  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:23.280390  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:23.330986  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:23.331034  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:23.344751  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:23.344788  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:23.420213  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:23.420239  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:23.420255  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:23.500449  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:23.500491  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:22.462590  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:24.962516  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:24.105502  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:26.110930  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:26.040050  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:26.056377  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:26.056463  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:26.094122  303486 cri.go:89] found id: ""
	I0920 19:09:26.094160  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.094170  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:26.094179  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:26.094246  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:26.129383  303486 cri.go:89] found id: ""
	I0920 19:09:26.129408  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.129415  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:26.129422  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:26.129472  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:26.163579  303486 cri.go:89] found id: ""
	I0920 19:09:26.163611  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.163621  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:26.163630  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:26.163699  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:26.208026  303486 cri.go:89] found id: ""
	I0920 19:09:26.208057  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.208065  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:26.208071  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:26.208138  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:26.245375  303486 cri.go:89] found id: ""
	I0920 19:09:26.245409  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.245421  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:26.245438  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:26.245500  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:26.280283  303486 cri.go:89] found id: ""
	I0920 19:09:26.280315  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.280326  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:26.280336  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:26.280397  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:26.314621  303486 cri.go:89] found id: ""
	I0920 19:09:26.314657  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.314670  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:26.314679  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:26.314773  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:26.347667  303486 cri.go:89] found id: ""
	I0920 19:09:26.347694  303486 logs.go:276] 0 containers: []
	W0920 19:09:26.347701  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:26.347711  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:26.347722  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:26.397221  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:26.397259  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:26.411126  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:26.411157  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:26.479631  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:26.479657  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:26.479686  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:26.555439  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:26.555477  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:26.962845  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:28.963560  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:28.605949  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:30.612349  303063 pod_ready.go:103] pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:32.104187  303063 pod_ready.go:82] duration metric: took 4m0.005608637s for pod "metrics-server-6867b74b74-2tnqc" in "kube-system" namespace to be "Ready" ...
	E0920 19:09:32.104213  303063 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 19:09:32.104224  303063 pod_ready.go:39] duration metric: took 4m5.679030104s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:32.104241  303063 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:09:32.104273  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:32.104327  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:32.151755  303063 cri.go:89] found id: "f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:32.151778  303063 cri.go:89] found id: ""
	I0920 19:09:32.151787  303063 logs.go:276] 1 containers: [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f]
	I0920 19:09:32.151866  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.157358  303063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:32.157426  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:32.201227  303063 cri.go:89] found id: "5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:32.201255  303063 cri.go:89] found id: ""
	I0920 19:09:32.201263  303063 logs.go:276] 1 containers: [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281]
	I0920 19:09:32.201327  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.206508  303063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:32.206604  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:32.243509  303063 cri.go:89] found id: "88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:32.243533  303063 cri.go:89] found id: ""
	I0920 19:09:32.243542  303063 logs.go:276] 1 containers: [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f]
	I0920 19:09:32.243595  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.247764  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:32.247836  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:32.283590  303063 cri.go:89] found id: "25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:32.283627  303063 cri.go:89] found id: ""
	I0920 19:09:32.283637  303063 logs.go:276] 1 containers: [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862]
	I0920 19:09:32.283727  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.287826  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:32.287893  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:32.329071  303063 cri.go:89] found id: "3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:32.329111  303063 cri.go:89] found id: ""
	I0920 19:09:32.329123  303063 logs.go:276] 1 containers: [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4]
	I0920 19:09:32.329196  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.333152  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:32.333236  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:32.372444  303063 cri.go:89] found id: "9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:32.372474  303063 cri.go:89] found id: ""
	I0920 19:09:32.372485  303063 logs.go:276] 1 containers: [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba]
	I0920 19:09:32.372548  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.376414  303063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:32.376494  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:32.412244  303063 cri.go:89] found id: ""
	I0920 19:09:32.412280  303063 logs.go:276] 0 containers: []
	W0920 19:09:32.412291  303063 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:32.412299  303063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 19:09:32.412352  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 19:09:32.449451  303063 cri.go:89] found id: "a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:32.449472  303063 cri.go:89] found id: "0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:32.449477  303063 cri.go:89] found id: ""
	I0920 19:09:32.449485  303063 logs.go:276] 2 containers: [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85]
	I0920 19:09:32.449544  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.454960  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:32.459688  303063 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:32.459720  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:09:32.599208  303063 logs.go:123] Gathering logs for etcd [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281] ...
	I0920 19:09:32.599241  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:32.656960  303063 logs.go:123] Gathering logs for kube-proxy [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4] ...
	I0920 19:09:32.657000  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:32.703259  303063 logs.go:123] Gathering logs for kube-controller-manager [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba] ...
	I0920 19:09:32.703308  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:32.769218  303063 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:32.769260  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:29.096877  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:29.110081  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:29.110170  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:29.152570  303486 cri.go:89] found id: ""
	I0920 19:09:29.152598  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.152608  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:29.152616  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:29.152689  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:29.188596  303486 cri.go:89] found id: ""
	I0920 19:09:29.188627  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.188638  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:29.188645  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:29.188713  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:29.228789  303486 cri.go:89] found id: ""
	I0920 19:09:29.228831  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.228841  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:29.228850  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:29.228913  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:29.260013  303486 cri.go:89] found id: ""
	I0920 19:09:29.260040  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.260048  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:29.260054  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:29.260105  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:29.293373  303486 cri.go:89] found id: ""
	I0920 19:09:29.293401  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.293411  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:29.293418  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:29.293487  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:29.325860  303486 cri.go:89] found id: ""
	I0920 19:09:29.325898  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.325925  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:29.325935  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:29.326027  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:29.358873  303486 cri.go:89] found id: ""
	I0920 19:09:29.358909  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.358921  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:29.358930  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:29.358994  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:29.392029  303486 cri.go:89] found id: ""
	I0920 19:09:29.392057  303486 logs.go:276] 0 containers: []
	W0920 19:09:29.392067  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:29.392080  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:29.392095  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:29.467460  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:29.467504  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:29.508258  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:29.508298  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:29.559238  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:29.559274  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:29.574233  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:29.574264  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:29.649318  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:32.150539  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:32.168442  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:32.168527  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:32.210069  303486 cri.go:89] found id: ""
	I0920 19:09:32.210103  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.210120  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:09:32.210129  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:32.210191  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:32.243468  303486 cri.go:89] found id: ""
	I0920 19:09:32.243501  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.243511  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:09:32.243519  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:32.243586  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:32.275958  303486 cri.go:89] found id: ""
	I0920 19:09:32.275988  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.275996  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:09:32.276003  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:32.276056  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:32.312560  303486 cri.go:89] found id: ""
	I0920 19:09:32.312598  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.312609  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:09:32.312620  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:32.312695  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:32.347157  303486 cri.go:89] found id: ""
	I0920 19:09:32.347185  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.347193  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:09:32.347200  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:32.347264  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:32.382787  303486 cri.go:89] found id: ""
	I0920 19:09:32.382820  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.382832  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:09:32.382841  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:32.382898  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:32.416182  303486 cri.go:89] found id: ""
	I0920 19:09:32.416216  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.416226  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:32.416234  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:09:32.416297  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:09:32.448863  303486 cri.go:89] found id: ""
	I0920 19:09:32.448895  303486 logs.go:276] 0 containers: []
	W0920 19:09:32.448906  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:09:32.448919  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:32.448934  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:32.501882  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:32.501934  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:32.517984  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:32.518014  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:09:32.588517  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:09:32.588547  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:32.588560  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:32.671869  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:09:32.671921  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:35.211780  303486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:35.225476  303486 kubeadm.go:597] duration metric: took 4m2.827297435s to restartPrimaryControlPlane
	W0920 19:09:35.225582  303486 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 19:09:35.225618  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:09:35.686956  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:35.701803  303486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:09:35.712572  303486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:09:35.722867  303486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:09:35.722894  303486 kubeadm.go:157] found existing configuration files:
	
	I0920 19:09:35.722948  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:09:35.732295  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:09:35.732358  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:09:35.741569  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:09:35.750515  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:09:35.750577  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:09:35.760469  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:09:35.770207  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:09:35.770284  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:09:35.780121  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:09:35.789887  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:09:35.789974  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:09:35.800914  303486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:09:35.871635  303486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 19:09:35.871691  303486 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:09:36.021411  303486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:09:36.021565  303486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:09:36.021773  303486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 19:09:36.217540  303486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:09:31.462557  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:33.463284  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:35.964501  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:36.723149  302869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.444941441s)
	I0920 19:09:36.723244  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:36.740763  302869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:09:36.751727  302869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:09:36.762710  302869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:09:36.762736  302869 kubeadm.go:157] found existing configuration files:
	
	I0920 19:09:36.762793  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:09:36.773454  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:09:36.773536  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:09:36.784738  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:09:36.794740  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:09:36.794818  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:09:36.805727  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:09:36.818253  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:09:36.818329  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:09:36.831210  302869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:09:36.842838  302869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:09:36.842914  302869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:09:36.853306  302869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:09:36.903121  302869 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 19:09:36.903285  302869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:09:37.025789  302869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:09:37.025969  302869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:09:37.026110  302869 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 19:09:37.034613  302869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:09:36.219542  303486 out.go:235]   - Generating certificates and keys ...
	I0920 19:09:36.219684  303486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:09:36.219769  303486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:09:36.219892  303486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:09:36.219973  303486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:09:36.220090  303486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:09:36.220181  303486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:09:36.220302  303486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:09:36.220414  303486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:09:36.220530  303486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:09:36.220626  303486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:09:36.220691  303486 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:09:36.220767  303486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:09:36.377012  303486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:09:36.706154  303486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:09:36.907341  303486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:09:37.091990  303486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:09:37.122813  303486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:09:37.124422  303486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:09:37.124531  303486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:09:37.277461  303486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:09:33.294289  303063 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:33.294346  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:33.362317  303063 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:33.362364  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:33.375712  303063 logs.go:123] Gathering logs for kube-scheduler [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862] ...
	I0920 19:09:33.375747  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:33.411136  303063 logs.go:123] Gathering logs for storage-provisioner [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5] ...
	I0920 19:09:33.411168  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:33.445649  303063 logs.go:123] Gathering logs for storage-provisioner [0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85] ...
	I0920 19:09:33.445690  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:33.478869  303063 logs.go:123] Gathering logs for container status ...
	I0920 19:09:33.478898  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:33.529433  303063 logs.go:123] Gathering logs for kube-apiserver [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f] ...
	I0920 19:09:33.529480  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:33.570515  303063 logs.go:123] Gathering logs for coredns [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f] ...
	I0920 19:09:33.570560  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:36.107490  303063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:36.124979  303063 api_server.go:72] duration metric: took 4m17.429642296s to wait for apiserver process to appear ...
	I0920 19:09:36.125014  303063 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:09:36.125069  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:36.125145  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:36.181962  303063 cri.go:89] found id: "f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:36.181990  303063 cri.go:89] found id: ""
	I0920 19:09:36.182001  303063 logs.go:276] 1 containers: [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f]
	I0920 19:09:36.182061  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.186792  303063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:36.186876  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:36.235963  303063 cri.go:89] found id: "5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:36.235993  303063 cri.go:89] found id: ""
	I0920 19:09:36.236003  303063 logs.go:276] 1 containers: [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281]
	I0920 19:09:36.236066  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.241177  303063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:36.241321  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:36.288324  303063 cri.go:89] found id: "88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:36.288353  303063 cri.go:89] found id: ""
	I0920 19:09:36.288361  303063 logs.go:276] 1 containers: [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f]
	I0920 19:09:36.288415  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.293328  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:36.293413  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:36.335126  303063 cri.go:89] found id: "25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:36.335153  303063 cri.go:89] found id: ""
	I0920 19:09:36.335163  303063 logs.go:276] 1 containers: [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862]
	I0920 19:09:36.335226  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.339400  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:36.339470  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:36.375555  303063 cri.go:89] found id: "3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:36.375582  303063 cri.go:89] found id: ""
	I0920 19:09:36.375592  303063 logs.go:276] 1 containers: [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4]
	I0920 19:09:36.375657  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.379679  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:36.379753  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:36.415398  303063 cri.go:89] found id: "9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:36.415424  303063 cri.go:89] found id: ""
	I0920 19:09:36.415434  303063 logs.go:276] 1 containers: [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba]
	I0920 19:09:36.415495  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.420183  303063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:36.420260  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:36.462018  303063 cri.go:89] found id: ""
	I0920 19:09:36.462049  303063 logs.go:276] 0 containers: []
	W0920 19:09:36.462060  303063 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:36.462068  303063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 19:09:36.462129  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 19:09:36.515520  303063 cri.go:89] found id: "a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:36.515551  303063 cri.go:89] found id: "0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:36.515557  303063 cri.go:89] found id: ""
	I0920 19:09:36.515567  303063 logs.go:276] 2 containers: [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85]
	I0920 19:09:36.515628  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.520140  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:36.524197  303063 logs.go:123] Gathering logs for kube-apiserver [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f] ...
	I0920 19:09:36.524222  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:36.589535  303063 logs.go:123] Gathering logs for coredns [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f] ...
	I0920 19:09:36.589570  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:36.628836  303063 logs.go:123] Gathering logs for storage-provisioner [0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85] ...
	I0920 19:09:36.628865  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:36.667614  303063 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:36.667654  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:37.164164  303063 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:37.164222  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:37.253505  303063 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:37.253550  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:37.272704  303063 logs.go:123] Gathering logs for kube-scheduler [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862] ...
	I0920 19:09:37.272742  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:37.315827  303063 logs.go:123] Gathering logs for kube-proxy [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4] ...
	I0920 19:09:37.315869  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:37.360449  303063 logs.go:123] Gathering logs for kube-controller-manager [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba] ...
	I0920 19:09:37.360479  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:37.428225  303063 logs.go:123] Gathering logs for storage-provisioner [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5] ...
	I0920 19:09:37.428270  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:37.469766  303063 logs.go:123] Gathering logs for container status ...
	I0920 19:09:37.469795  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:37.524517  303063 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:37.524553  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:09:37.652128  303063 logs.go:123] Gathering logs for etcd [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281] ...
	I0920 19:09:37.652162  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:37.036846  302869 out.go:235]   - Generating certificates and keys ...
	I0920 19:09:37.036956  302869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:09:37.037061  302869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:09:37.037194  302869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:09:37.037284  302869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:09:37.037386  302869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:09:37.037462  302869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:09:37.037546  302869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:09:37.037635  302869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:09:37.037734  302869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:09:37.037847  302869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:09:37.037918  302869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:09:37.037995  302869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:09:37.116270  302869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:09:37.615537  302869 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 19:09:37.907479  302869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:09:38.090167  302869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:09:38.209430  302869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:09:38.209780  302869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:09:38.212626  302869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:09:37.279714  303486 out.go:235]   - Booting up control plane ...
	I0920 19:09:37.279861  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:09:37.288448  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:09:37.289724  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:09:37.290822  303486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:09:37.294106  303486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 19:09:38.214873  302869 out.go:235]   - Booting up control plane ...
	I0920 19:09:38.214994  302869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:09:38.215102  302869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:09:38.215199  302869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:09:38.232798  302869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:09:38.238716  302869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:09:38.238784  302869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:09:38.370841  302869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 19:09:38.371037  302869 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 19:09:38.463252  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:40.463322  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:40.212781  303063 api_server.go:253] Checking apiserver healthz at https://192.168.50.230:8444/healthz ...
	I0920 19:09:40.217868  303063 api_server.go:279] https://192.168.50.230:8444/healthz returned 200:
	ok
	I0920 19:09:40.219021  303063 api_server.go:141] control plane version: v1.31.1
	I0920 19:09:40.219044  303063 api_server.go:131] duration metric: took 4.094023157s to wait for apiserver health ...
	I0920 19:09:40.219053  303063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:09:40.219077  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:09:40.219128  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:09:40.264337  303063 cri.go:89] found id: "f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:40.264365  303063 cri.go:89] found id: ""
	I0920 19:09:40.264376  303063 logs.go:276] 1 containers: [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f]
	I0920 19:09:40.264434  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.270143  303063 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:09:40.270222  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:09:40.321696  303063 cri.go:89] found id: "5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:40.321723  303063 cri.go:89] found id: ""
	I0920 19:09:40.321733  303063 logs.go:276] 1 containers: [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281]
	I0920 19:09:40.321799  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.329068  303063 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:09:40.329149  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:09:40.387241  303063 cri.go:89] found id: "88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:40.387329  303063 cri.go:89] found id: ""
	I0920 19:09:40.387357  303063 logs.go:276] 1 containers: [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f]
	I0920 19:09:40.387427  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.392896  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:09:40.392975  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:09:40.429173  303063 cri.go:89] found id: "25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:40.429200  303063 cri.go:89] found id: ""
	I0920 19:09:40.429210  303063 logs.go:276] 1 containers: [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862]
	I0920 19:09:40.429284  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.434102  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:09:40.434179  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:09:40.480569  303063 cri.go:89] found id: "3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:40.480598  303063 cri.go:89] found id: ""
	I0920 19:09:40.480607  303063 logs.go:276] 1 containers: [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4]
	I0920 19:09:40.480669  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.485821  303063 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:09:40.485935  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:09:40.531502  303063 cri.go:89] found id: "9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:40.531543  303063 cri.go:89] found id: ""
	I0920 19:09:40.531554  303063 logs.go:276] 1 containers: [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba]
	I0920 19:09:40.531613  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.535699  303063 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:09:40.535769  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:09:40.569788  303063 cri.go:89] found id: ""
	I0920 19:09:40.569823  303063 logs.go:276] 0 containers: []
	W0920 19:09:40.569835  303063 logs.go:278] No container was found matching "kindnet"
	I0920 19:09:40.569842  303063 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 19:09:40.569928  303063 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 19:09:40.604668  303063 cri.go:89] found id: "a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:40.604703  303063 cri.go:89] found id: "0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:40.604710  303063 cri.go:89] found id: ""
	I0920 19:09:40.604721  303063 logs.go:276] 2 containers: [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85]
	I0920 19:09:40.604790  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.608948  303063 ssh_runner.go:195] Run: which crictl
	I0920 19:09:40.613331  303063 logs.go:123] Gathering logs for kube-apiserver [f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f] ...
	I0920 19:09:40.613360  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9971978cdd0763b9a76e3fe87d705167ab87ce45178801f4cd07490c5e8397f"
	I0920 19:09:40.657680  303063 logs.go:123] Gathering logs for kube-scheduler [25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862] ...
	I0920 19:09:40.657726  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25594230a0a82263b7bb5b52ec80e061210c097c86fa90ca1902cb2b74a7c862"
	I0920 19:09:40.698087  303063 logs.go:123] Gathering logs for kube-controller-manager [9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba] ...
	I0920 19:09:40.698125  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a3d66bde4ebb230782cc904202763e22768132b216f33f909984ca61b0afdba"
	I0920 19:09:40.753643  303063 logs.go:123] Gathering logs for storage-provisioner [a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5] ...
	I0920 19:09:40.753683  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a77d8a3964187c08023fccdb188b45917d93c33e66262cc8201a6c0a551ac7d5"
	I0920 19:09:40.791741  303063 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:09:40.791790  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:09:41.176451  303063 logs.go:123] Gathering logs for container status ...
	I0920 19:09:41.176497  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:09:41.226352  303063 logs.go:123] Gathering logs for kubelet ...
	I0920 19:09:41.226386  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:09:41.307652  303063 logs.go:123] Gathering logs for dmesg ...
	I0920 19:09:41.307694  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:09:41.323271  303063 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:09:41.323307  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:09:41.441151  303063 logs.go:123] Gathering logs for etcd [5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281] ...
	I0920 19:09:41.441195  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5422e85be20622614b34ecd8a628515fe2c18256ca9692934dc713f9cf0ca281"
	I0920 19:09:41.495438  303063 logs.go:123] Gathering logs for coredns [88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f] ...
	I0920 19:09:41.495494  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88f0364540083a69de995c4a13fffee979f89ae8bff81275b28226d3a144cc7f"
	I0920 19:09:41.543879  303063 logs.go:123] Gathering logs for kube-proxy [3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4] ...
	I0920 19:09:41.543930  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3591419c15d21154c2735daae6b7f1ab823f99f6851016bc2ba341f10f4ff8c4"
	I0920 19:09:41.595010  303063 logs.go:123] Gathering logs for storage-provisioner [0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85] ...
	I0920 19:09:41.595055  303063 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d20ef881ab96b7f8f0fcd09f89795fa95f38c3351bbe346833dbb41ed252e85"
	I0920 19:09:44.140048  303063 system_pods.go:59] 8 kube-system pods found
	I0920 19:09:44.140078  303063 system_pods.go:61] "coredns-7c65d6cfc9-427x2" [48b87f9f-4697-4d76-aed1-3d54720172c6] Running
	I0920 19:09:44.140083  303063 system_pods.go:61] "etcd-default-k8s-diff-port-612312" [8537d9ce-87da-425d-a90a-eb4f30f9d23f] Running
	I0920 19:09:44.140087  303063 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-612312" [d5705298-0b1f-4f95-bf5d-c795e09dd30e] Running
	I0920 19:09:44.140091  303063 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-612312" [99b04f73-91d6-42d4-8def-09b65ca142f6] Running
	I0920 19:09:44.140094  303063 system_pods.go:61] "kube-proxy-zp8l5" [9fe30e51-ef3f-4448-916a-8ad75832b207] Running
	I0920 19:09:44.140097  303063 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-612312" [4b9dfd39-5b60-4a90-9edf-0af7e99f0ac6] Running
	I0920 19:09:44.140104  303063 system_pods.go:61] "metrics-server-6867b74b74-2tnqc" [35ce9a11-e606-41da-84bf-b3c5e9a18245] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:44.140108  303063 system_pods.go:61] "storage-provisioner" [6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502] Running
	I0920 19:09:44.140115  303063 system_pods.go:74] duration metric: took 3.921056539s to wait for pod list to return data ...
	I0920 19:09:44.140122  303063 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:09:44.143381  303063 default_sa.go:45] found service account: "default"
	I0920 19:09:44.143409  303063 default_sa.go:55] duration metric: took 3.281031ms for default service account to be created ...
	I0920 19:09:44.143422  303063 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:09:44.148161  303063 system_pods.go:86] 8 kube-system pods found
	I0920 19:09:44.148191  303063 system_pods.go:89] "coredns-7c65d6cfc9-427x2" [48b87f9f-4697-4d76-aed1-3d54720172c6] Running
	I0920 19:09:44.148199  303063 system_pods.go:89] "etcd-default-k8s-diff-port-612312" [8537d9ce-87da-425d-a90a-eb4f30f9d23f] Running
	I0920 19:09:44.148205  303063 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-612312" [d5705298-0b1f-4f95-bf5d-c795e09dd30e] Running
	I0920 19:09:44.148212  303063 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-612312" [99b04f73-91d6-42d4-8def-09b65ca142f6] Running
	I0920 19:09:44.148216  303063 system_pods.go:89] "kube-proxy-zp8l5" [9fe30e51-ef3f-4448-916a-8ad75832b207] Running
	I0920 19:09:44.148221  303063 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-612312" [4b9dfd39-5b60-4a90-9edf-0af7e99f0ac6] Running
	I0920 19:09:44.148230  303063 system_pods.go:89] "metrics-server-6867b74b74-2tnqc" [35ce9a11-e606-41da-84bf-b3c5e9a18245] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:44.148236  303063 system_pods.go:89] "storage-provisioner" [6c9dbdcb-65f3-4aeb-9b2e-e7f5b4c1f502] Running
	I0920 19:09:44.148248  303063 system_pods.go:126] duration metric: took 4.819429ms to wait for k8s-apps to be running ...
	I0920 19:09:44.148260  303063 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:09:44.148312  303063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:44.163839  303063 system_svc.go:56] duration metric: took 15.568956ms WaitForService to wait for kubelet
	I0920 19:09:44.163882  303063 kubeadm.go:582] duration metric: took 4m25.468555427s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:09:44.163911  303063 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:09:44.167622  303063 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:09:44.167656  303063 node_conditions.go:123] node cpu capacity is 2
	I0920 19:09:44.167671  303063 node_conditions.go:105] duration metric: took 3.752828ms to run NodePressure ...
	I0920 19:09:44.167690  303063 start.go:241] waiting for startup goroutines ...
	I0920 19:09:44.167700  303063 start.go:246] waiting for cluster config update ...
	I0920 19:09:44.167716  303063 start.go:255] writing updated cluster config ...
	I0920 19:09:44.168208  303063 ssh_runner.go:195] Run: rm -f paused
	I0920 19:09:44.223860  303063 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:09:44.226056  303063 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-612312" cluster and "default" namespace by default
	I0920 19:09:39.373109  302869 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002236347s
	I0920 19:09:39.373229  302869 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 19:09:44.375102  302869 kubeadm.go:310] [api-check] The API server is healthy after 5.001998039s
	I0920 19:09:44.405405  302869 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 19:09:44.428364  302869 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 19:09:44.470575  302869 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 19:09:44.470870  302869 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-339897 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 19:09:44.505469  302869 kubeadm.go:310] [bootstrap-token] Using token: v5zzut.gmtb3j9b0yqqwvtv
	I0920 19:09:44.507561  302869 out.go:235]   - Configuring RBAC rules ...
	I0920 19:09:44.507721  302869 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 19:09:44.522092  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 19:09:44.555238  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 19:09:44.559971  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 19:09:44.566954  302869 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 19:09:44.574111  302869 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 19:09:44.788900  302869 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 19:09:45.229897  302869 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 19:09:45.788397  302869 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 19:09:45.789415  302869 kubeadm.go:310] 
	I0920 19:09:45.789504  302869 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 19:09:45.789516  302869 kubeadm.go:310] 
	I0920 19:09:45.789614  302869 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 19:09:45.789631  302869 kubeadm.go:310] 
	I0920 19:09:45.789664  302869 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 19:09:45.789804  302869 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 19:09:45.789897  302869 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 19:09:45.789930  302869 kubeadm.go:310] 
	I0920 19:09:45.790043  302869 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 19:09:45.790061  302869 kubeadm.go:310] 
	I0920 19:09:45.790130  302869 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 19:09:45.790145  302869 kubeadm.go:310] 
	I0920 19:09:45.790203  302869 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 19:09:45.790269  302869 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 19:09:45.790330  302869 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 19:09:45.790337  302869 kubeadm.go:310] 
	I0920 19:09:45.790438  302869 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 19:09:45.790549  302869 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 19:09:45.790563  302869 kubeadm.go:310] 
	I0920 19:09:45.790664  302869 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v5zzut.gmtb3j9b0yqqwvtv \
	I0920 19:09:45.790792  302869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 \
	I0920 19:09:45.790823  302869 kubeadm.go:310] 	--control-plane 
	I0920 19:09:45.790835  302869 kubeadm.go:310] 
	I0920 19:09:45.790962  302869 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 19:09:45.790977  302869 kubeadm.go:310] 
	I0920 19:09:45.791045  302869 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v5zzut.gmtb3j9b0yqqwvtv \
	I0920 19:09:45.791164  302869 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 
	I0920 19:09:45.792825  302869 kubeadm.go:310] W0920 19:09:36.880654    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:09:45.793122  302869 kubeadm.go:310] W0920 19:09:36.881516    2555 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:09:45.793273  302869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:09:45.793317  302869 cni.go:84] Creating CNI manager for ""
	I0920 19:09:45.793331  302869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:09:45.795282  302869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:09:42.464639  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:44.464714  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:45.796961  302869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:09:45.808972  302869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:09:45.831122  302869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:09:45.831174  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:45.831208  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-339897 minikube.k8s.io/updated_at=2024_09_20T19_09_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=embed-certs-339897 minikube.k8s.io/primary=true
	I0920 19:09:46.057677  302869 ops.go:34] apiserver oom_adj: -16
	I0920 19:09:46.057798  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:46.558670  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:47.057876  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:47.558913  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:48.057925  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:48.557985  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:49.057925  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:49.558500  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:50.058507  302869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:09:50.198032  302869 kubeadm.go:1113] duration metric: took 4.366908909s to wait for elevateKubeSystemPrivileges
	I0920 19:09:50.198074  302869 kubeadm.go:394] duration metric: took 5m1.087269263s to StartCluster
	I0920 19:09:50.198100  302869 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:09:50.198209  302869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:09:50.200736  302869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:09:50.201068  302869 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:09:50.201327  302869 config.go:182] Loaded profile config "embed-certs-339897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:09:50.201393  302869 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:09:50.201482  302869 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-339897"
	I0920 19:09:50.201502  302869 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-339897"
	W0920 19:09:50.201512  302869 addons.go:243] addon storage-provisioner should already be in state true
	I0920 19:09:50.201542  302869 host.go:66] Checking if "embed-certs-339897" exists ...
	I0920 19:09:50.202007  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.202050  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.202261  302869 addons.go:69] Setting default-storageclass=true in profile "embed-certs-339897"
	I0920 19:09:50.202285  302869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-339897"
	I0920 19:09:50.202285  302869 addons.go:69] Setting metrics-server=true in profile "embed-certs-339897"
	I0920 19:09:50.202311  302869 addons.go:234] Setting addon metrics-server=true in "embed-certs-339897"
	W0920 19:09:50.202319  302869 addons.go:243] addon metrics-server should already be in state true
	I0920 19:09:50.202349  302869 host.go:66] Checking if "embed-certs-339897" exists ...
	I0920 19:09:50.202688  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.202752  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.202755  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.202793  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.203329  302869 out.go:177] * Verifying Kubernetes components...
	I0920 19:09:50.204655  302869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:09:50.224081  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46289
	I0920 19:09:50.224334  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45801
	I0920 19:09:50.224337  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0920 19:09:50.224579  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.224941  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.225039  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.225214  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.225231  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.225643  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.225682  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.225699  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.225798  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.225818  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.226018  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.226080  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.226564  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.226594  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.226777  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.227444  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.227494  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.229747  302869 addons.go:234] Setting addon default-storageclass=true in "embed-certs-339897"
	W0920 19:09:50.229771  302869 addons.go:243] addon default-storageclass should already be in state true
	I0920 19:09:50.229803  302869 host.go:66] Checking if "embed-certs-339897" exists ...
	I0920 19:09:50.230208  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.230261  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.243865  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42249
	I0920 19:09:50.244292  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.244828  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.244851  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.245080  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I0920 19:09:50.245252  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.245714  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.245810  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.246303  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.246323  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.246661  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.246806  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.248050  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:09:50.248671  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:09:50.250223  302869 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 19:09:50.250319  302869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:09:46.963562  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:48.965266  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:50.250485  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38237
	I0920 19:09:50.250954  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.251418  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.251435  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.251535  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:09:50.251556  302869 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:09:50.251594  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:09:50.251680  302869 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:09:50.251693  302869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:09:50.251706  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:09:50.251889  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.252452  302869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:09:50.252502  302869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:09:50.255422  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.255692  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.255902  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:09:50.255928  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.256372  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:09:50.256396  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.256442  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:09:50.256663  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:09:50.256697  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:09:50.256840  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:09:50.256868  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:09:50.257066  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:09:50.257089  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:09:50.257268  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:09:50.272424  302869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35925
	I0920 19:09:50.273107  302869 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:09:50.273729  302869 main.go:141] libmachine: Using API Version  1
	I0920 19:09:50.273746  302869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:09:50.274208  302869 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:09:50.274402  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetState
	I0920 19:09:50.276189  302869 main.go:141] libmachine: (embed-certs-339897) Calling .DriverName
	I0920 19:09:50.276384  302869 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:09:50.276399  302869 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:09:50.276417  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHHostname
	I0920 19:09:50.279319  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.279718  302869 main.go:141] libmachine: (embed-certs-339897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:b1:41", ip: ""} in network mk-embed-certs-339897: {Iface:virbr4 ExpiryTime:2024-09-20 20:04:34 +0000 UTC Type:0 Mac:52:54:00:dc:b1:41 Iaid: IPaddr:192.168.72.72 Prefix:24 Hostname:embed-certs-339897 Clientid:01:52:54:00:dc:b1:41}
	I0920 19:09:50.279747  302869 main.go:141] libmachine: (embed-certs-339897) DBG | domain embed-certs-339897 has defined IP address 192.168.72.72 and MAC address 52:54:00:dc:b1:41 in network mk-embed-certs-339897
	I0920 19:09:50.279850  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHPort
	I0920 19:09:50.280044  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHKeyPath
	I0920 19:09:50.280305  302869 main.go:141] libmachine: (embed-certs-339897) Calling .GetSSHUsername
	I0920 19:09:50.280481  302869 sshutil.go:53] new ssh client: &{IP:192.168.72.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/embed-certs-339897/id_rsa Username:docker}
	I0920 19:09:50.407262  302869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:09:50.455491  302869 node_ready.go:35] waiting up to 6m0s for node "embed-certs-339897" to be "Ready" ...
	I0920 19:09:50.503634  302869 node_ready.go:49] node "embed-certs-339897" has status "Ready":"True"
	I0920 19:09:50.503663  302869 node_ready.go:38] duration metric: took 48.13478ms for node "embed-certs-339897" to be "Ready" ...
	I0920 19:09:50.503672  302869 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:50.532327  302869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:50.589446  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:09:50.589482  302869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 19:09:50.613277  302869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:09:50.619161  302869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:09:50.662197  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:09:50.662232  302869 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:09:50.753073  302869 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:09:50.753106  302869 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:09:50.842679  302869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:09:51.790932  302869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.171721983s)
	I0920 19:09:51.790997  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791012  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.791029  302869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.177708427s)
	I0920 19:09:51.791073  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791089  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.791380  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.791438  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.791444  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.791483  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791380  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.791527  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.791541  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.791556  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.791416  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.791493  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.793128  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.793159  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.793177  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.793149  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:51.793148  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.793208  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:51.820906  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:51.820939  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:51.821290  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:51.821312  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:52.003182  302869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.160452395s)
	I0920 19:09:52.003247  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:52.003261  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:52.003593  302869 main.go:141] libmachine: (embed-certs-339897) DBG | Closing plugin on server side
	I0920 19:09:52.003600  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:52.003622  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:52.003632  302869 main.go:141] libmachine: Making call to close driver server
	I0920 19:09:52.003640  302869 main.go:141] libmachine: (embed-certs-339897) Calling .Close
	I0920 19:09:52.003985  302869 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:09:52.004003  302869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:09:52.004017  302869 addons.go:475] Verifying addon metrics-server=true in "embed-certs-339897"
	I0920 19:09:52.006444  302869 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 19:09:52.008313  302869 addons.go:510] duration metric: took 1.806914162s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 19:09:52.539578  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:53.539999  302869 pod_ready.go:93] pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:53.540026  302869 pod_ready.go:82] duration metric: took 3.007669334s for pod "coredns-7c65d6cfc9-2zlww" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:53.540036  302869 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:51.463340  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:53.963461  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:55.547997  302869 pod_ready.go:103] pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:57.552686  302869 pod_ready.go:93] pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.552714  302869 pod_ready.go:82] duration metric: took 4.01267227s for pod "coredns-7c65d6cfc9-7fxdr" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.552724  302869 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.560885  302869 pod_ready.go:93] pod "etcd-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.560910  302869 pod_ready.go:82] duration metric: took 8.179457ms for pod "etcd-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.560919  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.577414  302869 pod_ready.go:93] pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.577441  302869 pod_ready.go:82] duration metric: took 16.515029ms for pod "kube-apiserver-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.577451  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.588547  302869 pod_ready.go:93] pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.588574  302869 pod_ready.go:82] duration metric: took 11.116334ms for pod "kube-controller-manager-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.588583  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-whcbh" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.594919  302869 pod_ready.go:93] pod "kube-proxy-whcbh" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.594942  302869 pod_ready.go:82] duration metric: took 6.35266ms for pod "kube-proxy-whcbh" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.594951  302869 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.943559  302869 pod_ready.go:93] pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace has status "Ready":"True"
	I0920 19:09:57.943585  302869 pod_ready.go:82] duration metric: took 348.626555ms for pod "kube-scheduler-embed-certs-339897" in "kube-system" namespace to be "Ready" ...
	I0920 19:09:57.943592  302869 pod_ready.go:39] duration metric: took 7.439908161s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:09:57.943609  302869 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:09:57.943662  302869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:09:57.959537  302869 api_server.go:72] duration metric: took 7.758426976s to wait for apiserver process to appear ...
	I0920 19:09:57.959567  302869 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:09:57.959594  302869 api_server.go:253] Checking apiserver healthz at https://192.168.72.72:8443/healthz ...
	I0920 19:09:57.964316  302869 api_server.go:279] https://192.168.72.72:8443/healthz returned 200:
	ok
	I0920 19:09:57.965668  302869 api_server.go:141] control plane version: v1.31.1
	I0920 19:09:57.965690  302869 api_server.go:131] duration metric: took 6.115168ms to wait for apiserver health ...
	I0920 19:09:57.965697  302869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:09:58.148306  302869 system_pods.go:59] 9 kube-system pods found
	I0920 19:09:58.148339  302869 system_pods.go:61] "coredns-7c65d6cfc9-2zlww" [5eb78763-7160-4ae9-80c3-87a82a6dc992] Running
	I0920 19:09:58.148345  302869 system_pods.go:61] "coredns-7c65d6cfc9-7fxdr" [85a441e8-39b0-4623-a7bd-eebbd1574f20] Running
	I0920 19:09:58.148349  302869 system_pods.go:61] "etcd-embed-certs-339897" [150a2276-3896-498e-89f7-44cf4554da69] Running
	I0920 19:09:58.148352  302869 system_pods.go:61] "kube-apiserver-embed-certs-339897" [396520a3-2567-4267-852d-9f9525dd5e01] Running
	I0920 19:09:58.148356  302869 system_pods.go:61] "kube-controller-manager-embed-certs-339897" [7f64ad97-3230-4cf5-92ad-cf58ef88a2b0] Running
	I0920 19:09:58.148359  302869 system_pods.go:61] "kube-proxy-whcbh" [3a2dbb60-1a51-4874-98b8-75d1a35b0512] Running
	I0920 19:09:58.148361  302869 system_pods.go:61] "kube-scheduler-embed-certs-339897" [31214783-f8cf-46c6-a305-fde7692dfc72] Running
	I0920 19:09:58.148367  302869 system_pods.go:61] "metrics-server-6867b74b74-tw9fh" [8366591d-8916-4b9f-be8a-64ddc185f576] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:58.148371  302869 system_pods.go:61] "storage-provisioner" [8bcc482a-6905-436a-8d90-7eee9ba18f8b] Running
	I0920 19:09:58.148381  302869 system_pods.go:74] duration metric: took 182.677921ms to wait for pod list to return data ...
	I0920 19:09:58.148387  302869 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:09:58.344318  302869 default_sa.go:45] found service account: "default"
	I0920 19:09:58.344346  302869 default_sa.go:55] duration metric: took 195.952788ms for default service account to be created ...
	I0920 19:09:58.344357  302869 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:09:58.547996  302869 system_pods.go:86] 9 kube-system pods found
	I0920 19:09:58.548034  302869 system_pods.go:89] "coredns-7c65d6cfc9-2zlww" [5eb78763-7160-4ae9-80c3-87a82a6dc992] Running
	I0920 19:09:58.548043  302869 system_pods.go:89] "coredns-7c65d6cfc9-7fxdr" [85a441e8-39b0-4623-a7bd-eebbd1574f20] Running
	I0920 19:09:58.548048  302869 system_pods.go:89] "etcd-embed-certs-339897" [150a2276-3896-498e-89f7-44cf4554da69] Running
	I0920 19:09:58.548054  302869 system_pods.go:89] "kube-apiserver-embed-certs-339897" [396520a3-2567-4267-852d-9f9525dd5e01] Running
	I0920 19:09:58.548060  302869 system_pods.go:89] "kube-controller-manager-embed-certs-339897" [7f64ad97-3230-4cf5-92ad-cf58ef88a2b0] Running
	I0920 19:09:58.548066  302869 system_pods.go:89] "kube-proxy-whcbh" [3a2dbb60-1a51-4874-98b8-75d1a35b0512] Running
	I0920 19:09:58.548070  302869 system_pods.go:89] "kube-scheduler-embed-certs-339897" [31214783-f8cf-46c6-a305-fde7692dfc72] Running
	I0920 19:09:58.548079  302869 system_pods.go:89] "metrics-server-6867b74b74-tw9fh" [8366591d-8916-4b9f-be8a-64ddc185f576] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:09:58.548085  302869 system_pods.go:89] "storage-provisioner" [8bcc482a-6905-436a-8d90-7eee9ba18f8b] Running
	I0920 19:09:58.548099  302869 system_pods.go:126] duration metric: took 203.735171ms to wait for k8s-apps to be running ...
	I0920 19:09:58.548108  302869 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:09:58.548165  302869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:09:58.563235  302869 system_svc.go:56] duration metric: took 15.107997ms WaitForService to wait for kubelet
	I0920 19:09:58.563274  302869 kubeadm.go:582] duration metric: took 8.362165276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:09:58.563299  302869 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:09:58.744093  302869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:09:58.744155  302869 node_conditions.go:123] node cpu capacity is 2
	I0920 19:09:58.744171  302869 node_conditions.go:105] duration metric: took 180.864643ms to run NodePressure ...
	I0920 19:09:58.744186  302869 start.go:241] waiting for startup goroutines ...
	I0920 19:09:58.744196  302869 start.go:246] waiting for cluster config update ...
	I0920 19:09:58.744220  302869 start.go:255] writing updated cluster config ...
	I0920 19:09:58.744526  302869 ssh_runner.go:195] Run: rm -f paused
	I0920 19:09:58.794946  302869 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:09:58.797418  302869 out.go:177] * Done! kubectl is now configured to use "embed-certs-339897" cluster and "default" namespace by default
	I0920 19:09:56.464024  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:09:58.464282  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:00.963419  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:02.963506  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:04.963804  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:07.463546  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:09.962855  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:11.963447  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:13.964915  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:17.296411  303486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 19:10:17.296525  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:17.296765  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:16.462968  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:18.963906  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:22.297630  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:22.297923  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:21.463201  302538 pod_ready.go:103] pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace has status "Ready":"False"
	I0920 19:10:22.457112  302538 pod_ready.go:82] duration metric: took 4m0.000881628s for pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace to be "Ready" ...
	E0920 19:10:22.457161  302538 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7xpgm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 19:10:22.457180  302538 pod_ready.go:39] duration metric: took 4m14.047738931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:10:22.457208  302538 kubeadm.go:597] duration metric: took 4m21.028566787s to restartPrimaryControlPlane
	W0920 19:10:22.457265  302538 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 19:10:22.457291  302538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:10:32.298239  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:32.298525  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:48.632052  302538 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.17473972s)
	I0920 19:10:48.632143  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:10:48.648205  302538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:10:48.658969  302538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:10:48.668954  302538 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:10:48.668981  302538 kubeadm.go:157] found existing configuration files:
	
	I0920 19:10:48.669035  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:10:48.678138  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:10:48.678229  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:10:48.687960  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:10:48.697578  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:10:48.697644  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:10:48.707573  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:10:48.717059  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:10:48.717123  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:10:48.727642  302538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:10:48.737599  302538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:10:48.737681  302538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
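(The config check above reduces to: for each kubeconfig under /etc/kubernetes, keep it only if it already references the expected control-plane endpoint, otherwise remove it before re-running kubeadm init. A condensed shell sketch of that loop; minikube actually issues these as the individual ssh_runner calls shown:)

    ENDPOINT="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected endpoint
      if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done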
	I0920 19:10:48.749542  302538 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:10:48.795278  302538 kubeadm.go:310] W0920 19:10:48.780113    2961 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:10:48.796096  302538 kubeadm.go:310] W0920 19:10:48.780928    2961 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:10:48.910958  302538 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:10:52.299257  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:10:52.299561  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:10:56.716717  302538 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 19:10:56.716805  302538 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:10:56.716938  302538 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:10:56.717078  302538 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:10:56.717170  302538 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 19:10:56.717225  302538 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:10:56.719086  302538 out.go:235]   - Generating certificates and keys ...
	I0920 19:10:56.719199  302538 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:10:56.719286  302538 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:10:56.719407  302538 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:10:56.719505  302538 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:10:56.719624  302538 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:10:56.719720  302538 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:10:56.719811  302538 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:10:56.719928  302538 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:10:56.720049  302538 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:10:56.720154  302538 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:10:56.720224  302538 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:10:56.720287  302538 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:10:56.720334  302538 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:10:56.720386  302538 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 19:10:56.720432  302538 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:10:56.720486  302538 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:10:56.720533  302538 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:10:56.720606  302538 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:10:56.720701  302538 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:10:56.722504  302538 out.go:235]   - Booting up control plane ...
	I0920 19:10:56.722620  302538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:10:56.722748  302538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:10:56.722872  302538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:10:56.723020  302538 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:10:56.723105  302538 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:10:56.723148  302538 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:10:56.723337  302538 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 19:10:56.723455  302538 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 19:10:56.723515  302538 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.448196ms
	I0920 19:10:56.723612  302538 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 19:10:56.723706  302538 kubeadm.go:310] [api-check] The API server is healthy after 5.001495273s
	I0920 19:10:56.723888  302538 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 19:10:56.724046  302538 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 19:10:56.724131  302538 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 19:10:56.724406  302538 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-037711 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 19:10:56.724464  302538 kubeadm.go:310] [bootstrap-token] Using token: 2hi1gl.ipidz4nvj8gip8th
	I0920 19:10:56.726099  302538 out.go:235]   - Configuring RBAC rules ...
	I0920 19:10:56.726212  302538 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 19:10:56.726315  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 19:10:56.726479  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 19:10:56.726641  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 19:10:56.726794  302538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 19:10:56.726926  302538 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 19:10:56.727082  302538 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 19:10:56.727154  302538 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 19:10:56.727202  302538 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 19:10:56.727209  302538 kubeadm.go:310] 
	I0920 19:10:56.727261  302538 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 19:10:56.727267  302538 kubeadm.go:310] 
	I0920 19:10:56.727363  302538 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 19:10:56.727383  302538 kubeadm.go:310] 
	I0920 19:10:56.727424  302538 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 19:10:56.727507  302538 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 19:10:56.727607  302538 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 19:10:56.727620  302538 kubeadm.go:310] 
	I0920 19:10:56.727699  302538 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 19:10:56.727712  302538 kubeadm.go:310] 
	I0920 19:10:56.727775  302538 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 19:10:56.727790  302538 kubeadm.go:310] 
	I0920 19:10:56.727865  302538 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 19:10:56.727969  302538 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 19:10:56.728032  302538 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 19:10:56.728038  302538 kubeadm.go:310] 
	I0920 19:10:56.728106  302538 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 19:10:56.728171  302538 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 19:10:56.728177  302538 kubeadm.go:310] 
	I0920 19:10:56.728271  302538 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2hi1gl.ipidz4nvj8gip8th \
	I0920 19:10:56.728406  302538 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 \
	I0920 19:10:56.728438  302538 kubeadm.go:310] 	--control-plane 
	I0920 19:10:56.728451  302538 kubeadm.go:310] 
	I0920 19:10:56.728571  302538 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 19:10:56.728577  302538 kubeadm.go:310] 
	I0920 19:10:56.728675  302538 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2hi1gl.ipidz4nvj8gip8th \
	I0920 19:10:56.728823  302538 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3cde02adedbccf93dcef34eae7ba9dab1e91fdcc7f9f256c9ab2d442394a06b7 
	I0920 19:10:56.728837  302538 cni.go:84] Creating CNI manager for ""
	I0920 19:10:56.728843  302538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:10:56.730851  302538 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:10:56.732462  302538 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:10:56.745326  302538 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
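(The 496-byte file written above is the bridge CNI config; its contents are not reproduced in the log. If needed, it can be inspected on the node, for example:)

    # view the CNI config minikube just wrote (profile name taken from this log section)
    minikube ssh -p no-preload-037711 -- ls /etc/cni/net.d/
    minikube ssh -p no-preload-037711 -- sudo cat /etc/cni/net.d/1-k8s.conflist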
	I0920 19:10:56.764458  302538 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:10:56.764563  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:56.764620  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-037711 minikube.k8s.io/updated_at=2024_09_20T19_10_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=no-preload-037711 minikube.k8s.io/primary=true
	I0920 19:10:56.792026  302538 ops.go:34] apiserver oom_adj: -16
	I0920 19:10:56.976178  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:57.477172  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:57.977076  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:58.476357  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:58.977162  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:59.476924  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:10:59.976506  302538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:11:00.080925  302538 kubeadm.go:1113] duration metric: took 3.316440483s to wait for elevateKubeSystemPrivileges
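(The retry loop above is minikube polling until the default ServiceAccount exists, so that the minikube-rbac ClusterRoleBinding created a few lines earlier is effective. Condensed into one sketch, using the same binary and kubeconfig paths the log shows:)

    KUBECTL=/var/lib/minikube/binaries/v1.31.1/kubectl
    KCFG=/var/lib/minikube/kubeconfig

    sudo $KUBECTL create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=$KCFG

    # poll until the controller-manager has created the default ServiceAccount
    until sudo $KUBECTL get sa default --kubeconfig=$KCFG >/dev/null 2>&1; do
      sleep 0.5
    done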
	I0920 19:11:00.080968  302538 kubeadm.go:394] duration metric: took 4m58.701872852s to StartCluster
	I0920 19:11:00.080994  302538 settings.go:142] acquiring lock: {Name:mk3f8745a69cfd8f32f3909bae43a052a514c07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:11:00.081106  302538 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 19:11:00.082815  302538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-237658/kubeconfig: {Name:mk4e1cfa5f3a4a9f1e47cbfe019ab8af3c1e68fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:11:00.083064  302538 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:11:00.083137  302538 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:11:00.083243  302538 addons.go:69] Setting storage-provisioner=true in profile "no-preload-037711"
	I0920 19:11:00.083263  302538 addons.go:234] Setting addon storage-provisioner=true in "no-preload-037711"
	W0920 19:11:00.083272  302538 addons.go:243] addon storage-provisioner should already be in state true
	I0920 19:11:00.083263  302538 addons.go:69] Setting default-storageclass=true in profile "no-preload-037711"
	I0920 19:11:00.083299  302538 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-037711"
	I0920 19:11:00.083308  302538 host.go:66] Checking if "no-preload-037711" exists ...
	I0920 19:11:00.083304  302538 addons.go:69] Setting metrics-server=true in profile "no-preload-037711"
	I0920 19:11:00.083342  302538 addons.go:234] Setting addon metrics-server=true in "no-preload-037711"
	W0920 19:11:00.083354  302538 addons.go:243] addon metrics-server should already be in state true
	I0920 19:11:00.083385  302538 host.go:66] Checking if "no-preload-037711" exists ...
	I0920 19:11:00.083315  302538 config.go:182] Loaded profile config "no-preload-037711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:11:00.083667  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.083709  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.083715  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.083750  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.083864  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.083912  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.084969  302538 out.go:177] * Verifying Kubernetes components...
	I0920 19:11:00.086652  302538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:11:00.102128  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
	I0920 19:11:00.102362  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33853
	I0920 19:11:00.102750  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43575
	I0920 19:11:00.102879  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.103041  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.103431  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.103635  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.103651  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.103767  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.103783  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.104022  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.104040  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.104042  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.104180  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.104383  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.104394  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.104842  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.104881  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.104927  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.104963  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.107816  302538 addons.go:234] Setting addon default-storageclass=true in "no-preload-037711"
	W0920 19:11:00.107836  302538 addons.go:243] addon default-storageclass should already be in state true
	I0920 19:11:00.107865  302538 host.go:66] Checking if "no-preload-037711" exists ...
	I0920 19:11:00.108193  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.108236  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.121661  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I0920 19:11:00.122693  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.123520  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.123642  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.124299  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.124530  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.125624  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40695
	I0920 19:11:00.126343  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.126439  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I0920 19:11:00.126868  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.126947  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:11:00.127277  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.127302  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.127572  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.127599  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.127646  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.127902  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.128095  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.128318  302538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:11:00.128360  302538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:11:00.129099  302538 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:11:00.129788  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:11:00.130688  302538 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:11:00.130713  302538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:11:00.130732  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:11:00.131393  302538 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 19:11:00.132404  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:11:00.132432  302538 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:11:00.132454  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:11:00.134112  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.134627  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:11:00.134690  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.135041  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:11:00.135215  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:11:00.135448  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:11:00.135550  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:11:00.136315  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.136816  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:11:00.136849  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.137011  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:11:00.137231  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:11:00.137409  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:11:00.137589  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:11:00.166369  302538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I0920 19:11:00.166884  302538 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:11:00.167464  302538 main.go:141] libmachine: Using API Version  1
	I0920 19:11:00.167483  302538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:11:00.167850  302538 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:11:00.168037  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetState
	I0920 19:11:00.169668  302538 main.go:141] libmachine: (no-preload-037711) Calling .DriverName
	I0920 19:11:00.169875  302538 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:11:00.169891  302538 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:11:00.169925  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHHostname
	I0920 19:11:00.172907  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.173383  302538 main.go:141] libmachine: (no-preload-037711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:14", ip: ""} in network mk-no-preload-037711: {Iface:virbr3 ExpiryTime:2024-09-20 20:05:35 +0000 UTC Type:0 Mac:52:54:00:b0:ac:14 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:no-preload-037711 Clientid:01:52:54:00:b0:ac:14}
	I0920 19:11:00.173416  302538 main.go:141] libmachine: (no-preload-037711) DBG | domain no-preload-037711 has defined IP address 192.168.61.136 and MAC address 52:54:00:b0:ac:14 in network mk-no-preload-037711
	I0920 19:11:00.173577  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHPort
	I0920 19:11:00.173820  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHKeyPath
	I0920 19:11:00.174010  302538 main.go:141] libmachine: (no-preload-037711) Calling .GetSSHUsername
	I0920 19:11:00.174212  302538 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/no-preload-037711/id_rsa Username:docker}
	I0920 19:11:00.275468  302538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:11:00.290839  302538 node_ready.go:35] waiting up to 6m0s for node "no-preload-037711" to be "Ready" ...
	I0920 19:11:00.300222  302538 node_ready.go:49] node "no-preload-037711" has status "Ready":"True"
	I0920 19:11:00.300244  302538 node_ready.go:38] duration metric: took 9.368069ms for node "no-preload-037711" to be "Ready" ...
	I0920 19:11:00.300253  302538 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:11:00.306099  302538 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:00.364927  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:11:00.364956  302538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 19:11:00.382910  302538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:11:00.392581  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:11:00.392611  302538 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:11:00.404275  302538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:11:00.442677  302538 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:11:00.442707  302538 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:11:00.500976  302538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:11:01.337157  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337196  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337169  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337265  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337558  302538 main.go:141] libmachine: (no-preload-037711) DBG | Closing plugin on server side
	I0920 19:11:01.337573  302538 main.go:141] libmachine: (no-preload-037711) DBG | Closing plugin on server side
	I0920 19:11:01.337600  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.337613  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.337641  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337649  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337685  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.337702  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.337711  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.337720  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.337961  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.337978  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.338064  302538 main.go:141] libmachine: (no-preload-037711) DBG | Closing plugin on server side
	I0920 19:11:01.338114  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.338133  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.395956  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.395989  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.396327  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.396355  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.580133  302538 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.079115769s)
	I0920 19:11:01.580188  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.580203  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.580548  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.580568  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.580578  302538 main.go:141] libmachine: Making call to close driver server
	I0920 19:11:01.580586  302538 main.go:141] libmachine: (no-preload-037711) Calling .Close
	I0920 19:11:01.580817  302538 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:11:01.580842  302538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:11:01.580853  302538 addons.go:475] Verifying addon metrics-server=true in "no-preload-037711"
	I0920 19:11:01.582786  302538 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 19:11:01.584283  302538 addons.go:510] duration metric: took 1.501156808s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
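(The metrics-server addon is applied and reported as enabled here, but the later pod_ready lines show its pod never reaches Ready. A hedged sketch of inspecting that state by hand; the k8s-app=metrics-server label is assumed from the upstream metrics-server manifests:)

    kubectl -n kube-system get deploy metrics-server
    kubectl -n kube-system get pods -l k8s-app=metrics-server
    kubectl get apiservice v1beta1.metrics.k8s.io     # should eventually report Available

    # why the container is not becoming ready
    kubectl -n kube-system describe pod -l k8s-app=metrics-server
    kubectl top nodes    # fails until the APIService is served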
	I0920 19:11:02.314471  302538 pod_ready.go:103] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:04.817174  302538 pod_ready.go:103] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:07.312399  302538 pod_ready.go:103] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:07.812969  302538 pod_ready.go:93] pod "etcd-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:07.812999  302538 pod_ready.go:82] duration metric: took 7.506877081s for pod "etcd-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:07.813008  302538 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:07.818172  302538 pod_ready.go:93] pod "kube-apiserver-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:07.818200  302538 pod_ready.go:82] duration metric: took 5.184579ms for pod "kube-apiserver-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:07.818211  302538 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:09.825772  302538 pod_ready.go:103] pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace has status "Ready":"False"
	I0920 19:11:10.325453  302538 pod_ready.go:93] pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:10.325479  302538 pod_ready.go:82] duration metric: took 2.507262085s for pod "kube-controller-manager-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:10.325489  302538 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:10.331181  302538 pod_ready.go:93] pod "kube-scheduler-no-preload-037711" in "kube-system" namespace has status "Ready":"True"
	I0920 19:11:10.331208  302538 pod_ready.go:82] duration metric: took 5.711573ms for pod "kube-scheduler-no-preload-037711" in "kube-system" namespace to be "Ready" ...
	I0920 19:11:10.331216  302538 pod_ready.go:39] duration metric: took 10.030954081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:11:10.331233  302538 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:11:10.331286  302538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:11:10.348104  302538 api_server.go:72] duration metric: took 10.265008499s to wait for apiserver process to appear ...
	I0920 19:11:10.348135  302538 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:11:10.348157  302538 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0920 19:11:10.352242  302538 api_server.go:279] https://192.168.61.136:8443/healthz returned 200:
	ok
	I0920 19:11:10.353228  302538 api_server.go:141] control plane version: v1.31.1
	I0920 19:11:10.353249  302538 api_server.go:131] duration metric: took 5.107446ms to wait for apiserver health ...
	I0920 19:11:10.353257  302538 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:11:10.358560  302538 system_pods.go:59] 9 kube-system pods found
	I0920 19:11:10.358588  302538 system_pods.go:61] "coredns-7c65d6cfc9-gdfh9" [61c6d6d8-62b9-4db3-a3c3-fd0daec82a9f] Running
	I0920 19:11:10.358593  302538 system_pods.go:61] "coredns-7c65d6cfc9-h84nm" [6ada3ba7-1ccd-474b-850b-c00a77dfbb92] Running
	I0920 19:11:10.358597  302538 system_pods.go:61] "etcd-no-preload-037711" [9ace2dcd-0562-46d5-99be-65be4ea053d9] Running
	I0920 19:11:10.358601  302538 system_pods.go:61] "kube-apiserver-no-preload-037711" [1dbfa130-d2dd-420d-a32c-1e82b535c112] Running
	I0920 19:11:10.358604  302538 system_pods.go:61] "kube-controller-manager-no-preload-037711" [56462390-dedd-4281-ac85-2671f7a10cb1] Running
	I0920 19:11:10.358607  302538 system_pods.go:61] "kube-proxy-bvfqh" [2170ef3f-58f0-4d42-9f15-d9c952e0e2ec] Running
	I0920 19:11:10.358610  302538 system_pods.go:61] "kube-scheduler-no-preload-037711" [e996ce53-7ee6-4d1d-bd0b-8188d76966b9] Running
	I0920 19:11:10.358617  302538 system_pods.go:61] "metrics-server-6867b74b74-rpfqm" [ba7c8518-6c3e-4751-a9a5-29c77990a29c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:11:10.358620  302538 system_pods.go:61] "storage-provisioner" [e7f05c0a-c6be-4e68-959e-966c17c9cc5e] Running
	I0920 19:11:10.358629  302538 system_pods.go:74] duration metric: took 5.365343ms to wait for pod list to return data ...
	I0920 19:11:10.358635  302538 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:11:10.361229  302538 default_sa.go:45] found service account: "default"
	I0920 19:11:10.361255  302538 default_sa.go:55] duration metric: took 2.612292ms for default service account to be created ...
	I0920 19:11:10.361264  302538 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:11:10.367188  302538 system_pods.go:86] 9 kube-system pods found
	I0920 19:11:10.367221  302538 system_pods.go:89] "coredns-7c65d6cfc9-gdfh9" [61c6d6d8-62b9-4db3-a3c3-fd0daec82a9f] Running
	I0920 19:11:10.367229  302538 system_pods.go:89] "coredns-7c65d6cfc9-h84nm" [6ada3ba7-1ccd-474b-850b-c00a77dfbb92] Running
	I0920 19:11:10.367235  302538 system_pods.go:89] "etcd-no-preload-037711" [9ace2dcd-0562-46d5-99be-65be4ea053d9] Running
	I0920 19:11:10.367241  302538 system_pods.go:89] "kube-apiserver-no-preload-037711" [1dbfa130-d2dd-420d-a32c-1e82b535c112] Running
	I0920 19:11:10.367248  302538 system_pods.go:89] "kube-controller-manager-no-preload-037711" [56462390-dedd-4281-ac85-2671f7a10cb1] Running
	I0920 19:11:10.367254  302538 system_pods.go:89] "kube-proxy-bvfqh" [2170ef3f-58f0-4d42-9f15-d9c952e0e2ec] Running
	I0920 19:11:10.367260  302538 system_pods.go:89] "kube-scheduler-no-preload-037711" [e996ce53-7ee6-4d1d-bd0b-8188d76966b9] Running
	I0920 19:11:10.367267  302538 system_pods.go:89] "metrics-server-6867b74b74-rpfqm" [ba7c8518-6c3e-4751-a9a5-29c77990a29c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:11:10.367273  302538 system_pods.go:89] "storage-provisioner" [e7f05c0a-c6be-4e68-959e-966c17c9cc5e] Running
	I0920 19:11:10.367283  302538 system_pods.go:126] duration metric: took 6.01247ms to wait for k8s-apps to be running ...
	I0920 19:11:10.367292  302538 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:11:10.367354  302538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:11:10.381551  302538 system_svc.go:56] duration metric: took 14.250301ms WaitForService to wait for kubelet
	I0920 19:11:10.381582  302538 kubeadm.go:582] duration metric: took 10.298492318s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:11:10.381601  302538 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:11:10.385405  302538 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:11:10.385442  302538 node_conditions.go:123] node cpu capacity is 2
	I0920 19:11:10.385455  302538 node_conditions.go:105] duration metric: took 3.849463ms to run NodePressure ...
	I0920 19:11:10.385468  302538 start.go:241] waiting for startup goroutines ...
	I0920 19:11:10.385474  302538 start.go:246] waiting for cluster config update ...
	I0920 19:11:10.385485  302538 start.go:255] writing updated cluster config ...
	I0920 19:11:10.385786  302538 ssh_runner.go:195] Run: rm -f paused
	I0920 19:11:10.436362  302538 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:11:10.438538  302538 out.go:177] * Done! kubectl is now configured to use "no-preload-037711" cluster and "default" namespace by default
	I0920 19:11:32.301334  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:11:32.302020  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:11:32.302048  303486 kubeadm.go:310] 
	I0920 19:11:32.302147  303486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 19:11:32.302252  303486 kubeadm.go:310] 		timed out waiting for the condition
	I0920 19:11:32.302279  303486 kubeadm.go:310] 
	I0920 19:11:32.302366  303486 kubeadm.go:310] 	This error is likely caused by:
	I0920 19:11:32.302453  303486 kubeadm.go:310] 		- The kubelet is not running
	I0920 19:11:32.302713  303486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 19:11:32.302731  303486 kubeadm.go:310] 
	I0920 19:11:32.303023  303486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 19:11:32.303099  303486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 19:11:32.303200  303486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 19:11:32.303232  303486 kubeadm.go:310] 
	I0920 19:11:32.303438  303486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 19:11:32.303669  303486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 19:11:32.303699  303486 kubeadm.go:310] 
	I0920 19:11:32.303965  303486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 19:11:32.304199  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 19:11:32.304410  303486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 19:11:32.304577  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 19:11:32.304624  303486 kubeadm.go:310] 
	I0920 19:11:32.305105  303486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:11:32.305465  303486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 19:11:32.305655  303486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0920 19:11:32.305713  303486 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
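(The failure above is kubeadm's generic wait-control-plane timeout for the v1.20.0 cluster. The log's own troubleshooting suggestions, collected into one sketch to run on the affected node:)

    # is the kubelet running at all, and why not
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet --no-pager | tail -n 50

    # the health endpoint kubeadm polls
    curl -sSL http://localhost:10248/healthz

    # control-plane containers under cri-o
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause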
	
	I0920 19:11:32.305758  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:11:32.760742  303486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:11:32.775675  303486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:11:32.785785  303486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:11:32.785806  303486 kubeadm.go:157] found existing configuration files:
	
	I0920 19:11:32.785854  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:11:32.795133  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:11:32.795210  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:11:32.805681  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:11:32.815299  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:11:32.815362  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:11:32.827215  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:11:32.836597  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:11:32.836682  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:11:32.846621  303486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:11:32.855610  303486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:11:32.855675  303486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:11:32.866824  303486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:11:33.103745  303486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:13:29.101212  303486 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 19:13:29.101347  303486 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 19:13:29.103031  303486 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 19:13:29.103142  303486 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:13:29.103216  303486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:13:29.103318  303486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:13:29.103437  303486 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 19:13:29.103507  303486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:13:29.105521  303486 out.go:235]   - Generating certificates and keys ...
	I0920 19:13:29.105622  303486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:13:29.105704  303486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:13:29.105820  303486 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:13:29.105955  303486 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:13:29.106058  303486 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:13:29.106132  303486 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:13:29.106219  303486 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:13:29.106318  303486 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:13:29.106430  303486 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:13:29.106548  303486 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:13:29.106611  303486 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:13:29.106699  303486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:13:29.106766  303486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:13:29.106844  303486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:13:29.106935  303486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:13:29.107011  303486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:13:29.107117  303486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:13:29.107223  303486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:13:29.107289  303486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:13:29.107376  303486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:13:29.108804  303486 out.go:235]   - Booting up control plane ...
	I0920 19:13:29.108952  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:13:29.109021  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:13:29.109082  303486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:13:29.109166  303486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:13:29.109313  303486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 19:13:29.109359  303486 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 19:13:29.109462  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.109630  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.109699  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.109878  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.109966  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.110133  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.110213  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.110382  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.110441  303486 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:13:29.110606  303486 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:13:29.110616  303486 kubeadm.go:310] 
	I0920 19:13:29.110661  303486 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 19:13:29.110699  303486 kubeadm.go:310] 		timed out waiting for the condition
	I0920 19:13:29.110706  303486 kubeadm.go:310] 
	I0920 19:13:29.110739  303486 kubeadm.go:310] 	This error is likely caused by:
	I0920 19:13:29.110769  303486 kubeadm.go:310] 		- The kubelet is not running
	I0920 19:13:29.110866  303486 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 19:13:29.110875  303486 kubeadm.go:310] 
	I0920 19:13:29.110969  303486 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 19:13:29.111003  303486 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 19:13:29.111031  303486 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 19:13:29.111037  303486 kubeadm.go:310] 
	I0920 19:13:29.111141  303486 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 19:13:29.111224  303486 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 19:13:29.111231  303486 kubeadm.go:310] 
	I0920 19:13:29.111327  303486 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 19:13:29.111407  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 19:13:29.111481  303486 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 19:13:29.111542  303486 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 19:13:29.111610  303486 kubeadm.go:394] duration metric: took 7m56.768319159s to StartCluster
	I0920 19:13:29.111640  303486 kubeadm.go:310] 
	I0920 19:13:29.111664  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:13:29.111734  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:13:29.157817  303486 cri.go:89] found id: ""
	I0920 19:13:29.157849  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.157859  303486 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:13:29.157867  303486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:13:29.157950  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:13:29.192130  303486 cri.go:89] found id: ""
	I0920 19:13:29.192164  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.192179  303486 logs.go:278] No container was found matching "etcd"
	I0920 19:13:29.192187  303486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:13:29.192243  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:13:29.227594  303486 cri.go:89] found id: ""
	I0920 19:13:29.227631  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.227642  303486 logs.go:278] No container was found matching "coredns"
	I0920 19:13:29.227651  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:13:29.227724  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:13:29.261948  303486 cri.go:89] found id: ""
	I0920 19:13:29.261981  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.261996  303486 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:13:29.262004  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:13:29.262072  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:13:29.295148  303486 cri.go:89] found id: ""
	I0920 19:13:29.295181  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.295191  303486 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:13:29.295200  303486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:13:29.295270  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:13:29.328094  303486 cri.go:89] found id: ""
	I0920 19:13:29.328127  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.328135  303486 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:13:29.328142  303486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:13:29.328194  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:13:29.368830  303486 cri.go:89] found id: ""
	I0920 19:13:29.368870  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.368878  303486 logs.go:278] No container was found matching "kindnet"
	I0920 19:13:29.368885  303486 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:13:29.368947  303486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:13:29.420051  303486 cri.go:89] found id: ""
	I0920 19:13:29.420081  303486 logs.go:276] 0 containers: []
	W0920 19:13:29.420091  303486 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 19:13:29.420106  303486 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:13:29.420123  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:13:29.498322  303486 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 19:13:29.498350  303486 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:13:29.498364  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:13:29.601796  303486 logs.go:123] Gathering logs for container status ...
	I0920 19:13:29.601842  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:13:29.644325  303486 logs.go:123] Gathering logs for kubelet ...
	I0920 19:13:29.644368  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:13:29.692691  303486 logs.go:123] Gathering logs for dmesg ...
	I0920 19:13:29.692736  303486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0920 19:13:29.707508  303486 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 19:13:29.707577  303486 out.go:270] * 
	W0920 19:13:29.707646  303486 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 19:13:29.707664  303486 out.go:270] * 
	W0920 19:13:29.708560  303486 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 19:13:29.711313  303486 out.go:201] 
	W0920 19:13:29.712520  303486 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 19:13:29.712553  303486 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 19:13:29.712576  303486 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 19:13:29.713832  303486 out.go:201] 
	
	
	==> CRI-O <==
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.612183099Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860320612159479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=710c66ed-ef1f-4262-98ba-8be8bd6713f6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.612865933Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=024beb51-3772-4b7a-99e5-9754f6bb0ef4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.612959246Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=024beb51-3772-4b7a-99e5-9754f6bb0ef4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.613029418Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=024beb51-3772-4b7a-99e5-9754f6bb0ef4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.648399142Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2be9a709-50c1-4b69-8ddc-4b9483cd3532 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.648560295Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2be9a709-50c1-4b69-8ddc-4b9483cd3532 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.650271638Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d123fb7-2282-444e-afba-283a4d1fefe2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.650809733Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860320650763382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d123fb7-2282-444e-afba-283a4d1fefe2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.651512033Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aca6659f-c8a4-4561-a8f1-8f244281caac name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.651598389Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aca6659f-c8a4-4561-a8f1-8f244281caac name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.651651338Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=aca6659f-c8a4-4561-a8f1-8f244281caac name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.685673547Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=824b6d5a-c8e3-4241-91bd-6522d1217ea9 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.685788198Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=824b6d5a-c8e3-4241-91bd-6522d1217ea9 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.687317868Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a69c551b-bc6d-4c57-8513-e13e58fb30b7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.687971817Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860320687933981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a69c551b-bc6d-4c57-8513-e13e58fb30b7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.688666309Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17b2d1ef-677b-4ac5-a109-5a6dbb72fe19 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.688769656Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17b2d1ef-677b-4ac5-a109-5a6dbb72fe19 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.688821824Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=17b2d1ef-677b-4ac5-a109-5a6dbb72fe19 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.722376624Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=30395176-2505-4b09-b64b-33064ef91e23 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.722497160Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=30395176-2505-4b09-b64b-33064ef91e23 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.723856705Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d0b2135-fb17-4e5a-90ef-dda8924c906d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.724233425Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860320724208484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d0b2135-fb17-4e5a-90ef-dda8924c906d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.724972107Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=08155372-fd3e-4d0d-9cfb-c5c3686f3f26 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.725021520Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08155372-fd3e-4d0d-9cfb-c5c3686f3f26 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:25:20 old-k8s-version-425599 crio[625]: time="2024-09-20 19:25:20.725065503Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=08155372-fd3e-4d0d-9cfb-c5c3686f3f26 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep20 19:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051564] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038083] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.892466] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.024288] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.561052] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.228283] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.074163] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.093321] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.193588] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.158367] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.273001] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +6.667482] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.066383] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.180794] systemd-fstab-generator[1000]: Ignoring "noauto" option for root device
	[ +11.395339] kauditd_printk_skb: 46 callbacks suppressed
	[Sep20 19:09] systemd-fstab-generator[5034]: Ignoring "noauto" option for root device
	[Sep20 19:11] systemd-fstab-generator[5316]: Ignoring "noauto" option for root device
	[  +0.064997] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:25:20 up 20 min,  0 users,  load average: 0.02, 0.05, 0.04
	Linux old-k8s-version-425599 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 20 19:25:16 old-k8s-version-425599 kubelet[6834]:         /usr/local/go/src/net/lookup.go:299 +0x685
	Sep 20 19:25:16 old-k8s-version-425599 kubelet[6834]: net.(*Resolver).internetAddrList(0x70c5740, 0x4f7fe40, 0xc000454e40, 0x48ab5d6, 0x3, 0xc000793830, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 20 19:25:16 old-k8s-version-425599 kubelet[6834]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Sep 20 19:25:16 old-k8s-version-425599 kubelet[6834]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000454e40, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000793830, 0x24, 0x0, ...)
	Sep 20 19:25:16 old-k8s-version-425599 kubelet[6834]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Sep 20 19:25:16 old-k8s-version-425599 kubelet[6834]: net.(*Dialer).DialContext(0xc000c66ea0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000793830, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 20 19:25:16 old-k8s-version-425599 kubelet[6834]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Sep 20 19:25:16 old-k8s-version-425599 kubelet[6834]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000c6be60, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000793830, 0x24, 0x60, 0x7f242dc7ac88, 0x118, ...)
	Sep 20 19:25:16 old-k8s-version-425599 kubelet[6834]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Sep 20 19:25:16 old-k8s-version-425599 kubelet[6834]: net/http.(*Transport).dial(0xc000c72000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000793830, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 20 19:25:16 old-k8s-version-425599 kubelet[6834]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Sep 20 19:25:16 old-k8s-version-425599 kubelet[6834]: net/http.(*Transport).dialConn(0xc000c72000, 0x4f7fe00, 0xc000052030, 0x0, 0xc000220e40, 0x5, 0xc000793830, 0x24, 0x0, 0xc000305e60, ...)
	Sep 20 19:25:16 old-k8s-version-425599 kubelet[6834]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Sep 20 19:25:16 old-k8s-version-425599 kubelet[6834]: net/http.(*Transport).dialConnFor(0xc000c72000, 0xc000042840)
	Sep 20 19:25:16 old-k8s-version-425599 kubelet[6834]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Sep 20 19:25:16 old-k8s-version-425599 kubelet[6834]: created by net/http.(*Transport).queueForDial
	Sep 20 19:25:16 old-k8s-version-425599 kubelet[6834]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Sep 20 19:25:17 old-k8s-version-425599 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 143.
	Sep 20 19:25:17 old-k8s-version-425599 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 20 19:25:17 old-k8s-version-425599 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 20 19:25:17 old-k8s-version-425599 kubelet[6844]: I0920 19:25:17.626737    6844 server.go:416] Version: v1.20.0
	Sep 20 19:25:17 old-k8s-version-425599 kubelet[6844]: I0920 19:25:17.627110    6844 server.go:837] Client rotation is on, will bootstrap in background
	Sep 20 19:25:17 old-k8s-version-425599 kubelet[6844]: I0920 19:25:17.629083    6844 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 20 19:25:17 old-k8s-version-425599 kubelet[6844]: W0920 19:25:17.630039    6844 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 20 19:25:17 old-k8s-version-425599 kubelet[6844]: I0920 19:25:17.630247    6844 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-425599 -n old-k8s-version-425599
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-425599 -n old-k8s-version-425599: exit status 2 (240.893466ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-425599" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (165.28s)
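
Note: the wait-control-plane timeout above, together with the kubelet restart counter at 143 and the "Cannot detect current cgroup on cgroup v2" warning in the kubelet log, indicates the kubelet never became healthy on this v1.20.0 node. Below is a minimal troubleshooting sketch based only on the commands quoted in the log itself; the profile name old-k8s-version-425599 is taken from the output above, and the start flags are assumptions matching this job's KVM/cri-o/v1.20.0 configuration, not the test's verbatim command line:

	# inspect kubelet and container state on the node (commands quoted from the kubeadm output above)
	minikube ssh -p old-k8s-version-425599 -- sudo systemctl status kubelet
	minikube ssh -p old-k8s-version-425599 -- sudo journalctl -xeu kubelet
	minikube ssh -p old-k8s-version-425599 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a

	# retry with the cgroup driver hint printed by minikube (assumed flags, for illustration only)
	minikube start -p old-k8s-version-425599 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd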

                                                
                                    

Test pass (241/311)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 23.36
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 11.94
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.1
18 TestDownloadOnly/v1.31.1/DeleteAll 0.14
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.65
22 TestOffline 104.72
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 136.19
31 TestAddons/serial/GCPAuth/Namespaces 0.15
35 TestAddons/parallel/InspektorGadget 11.8
38 TestAddons/parallel/CSI 59.96
39 TestAddons/parallel/Headlamp 20.1
40 TestAddons/parallel/CloudSpanner 7.13
41 TestAddons/parallel/LocalPath 13.43
42 TestAddons/parallel/NvidiaDevicePlugin 6.77
43 TestAddons/parallel/Yakd 12.08
44 TestAddons/StoppedEnableDisable 92.72
45 TestCertOptions 43.46
46 TestCertExpiration 312.96
48 TestForceSystemdFlag 60.79
49 TestForceSystemdEnv 42.26
51 TestKVMDriverInstallOrUpdate 4.18
55 TestErrorSpam/setup 41.12
56 TestErrorSpam/start 0.37
57 TestErrorSpam/status 0.74
58 TestErrorSpam/pause 1.53
59 TestErrorSpam/unpause 1.69
60 TestErrorSpam/stop 4.44
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 54.92
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 39.79
67 TestFunctional/serial/KubeContext 0.05
68 TestFunctional/serial/KubectlGetPods 0.15
71 TestFunctional/serial/CacheCmd/cache/add_remote 5.24
72 TestFunctional/serial/CacheCmd/cache/add_local 2.54
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
76 TestFunctional/serial/CacheCmd/cache/cache_reload 2.12
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 34.81
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.38
83 TestFunctional/serial/LogsFileCmd 1.32
84 TestFunctional/serial/InvalidService 4.34
86 TestFunctional/parallel/ConfigCmd 0.37
87 TestFunctional/parallel/DashboardCmd 32.38
88 TestFunctional/parallel/DryRun 0.29
89 TestFunctional/parallel/InternationalLanguage 0.15
90 TestFunctional/parallel/StatusCmd 1.04
94 TestFunctional/parallel/ServiceCmdConnect 9.64
95 TestFunctional/parallel/AddonsCmd 0.13
96 TestFunctional/parallel/PersistentVolumeClaim 44.7
98 TestFunctional/parallel/SSHCmd 0.39
99 TestFunctional/parallel/CpCmd 1.47
100 TestFunctional/parallel/MySQL 33.1
101 TestFunctional/parallel/FileSync 0.22
102 TestFunctional/parallel/CertSync 1.44
106 TestFunctional/parallel/NodeLabels 0.08
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
110 TestFunctional/parallel/License 1.08
111 TestFunctional/parallel/ServiceCmd/DeployApp 11.26
112 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
113 TestFunctional/parallel/ProfileCmd/profile_list 0.39
114 TestFunctional/parallel/MountCmd/any-port 12.68
115 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
116 TestFunctional/parallel/Version/short 0.05
117 TestFunctional/parallel/Version/components 0.7
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.89
121 TestFunctional/parallel/ImageCommands/ImageListYaml 1.32
123 TestFunctional/parallel/ImageCommands/Setup 1.82
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.55
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.7
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.61
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.89
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
143 TestFunctional/parallel/ServiceCmd/List 0.27
144 TestFunctional/parallel/ServiceCmd/JSONOutput 0.26
145 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
146 TestFunctional/parallel/ServiceCmd/Format 0.28
147 TestFunctional/parallel/ServiceCmd/URL 0.28
148 TestFunctional/parallel/MountCmd/specific-port 1.85
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.49
150 TestFunctional/delete_echo-server_images 0.03
151 TestFunctional/delete_my-image_image 0.02
152 TestFunctional/delete_minikube_cached_images 0.02
156 TestMultiControlPlane/serial/StartCluster 199.78
157 TestMultiControlPlane/serial/DeployApp 7.92
158 TestMultiControlPlane/serial/PingHostFromPods 1.22
159 TestMultiControlPlane/serial/AddWorkerNode 56.74
160 TestMultiControlPlane/serial/NodeLabels 0.07
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
162 TestMultiControlPlane/serial/CopyFile 13.25
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 4.03
168 TestMultiControlPlane/serial/DeleteSecondaryNode 16.63
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
171 TestMultiControlPlane/serial/RestartCluster 349.76
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
173 TestMultiControlPlane/serial/AddSecondaryNode 79.91
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
178 TestJSONOutput/start/Command 52.3
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.69
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.64
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 6.65
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.21
206 TestMainNoArgs 0.05
207 TestMinikubeProfile 88.83
210 TestMountStart/serial/StartWithMountFirst 24.7
211 TestMountStart/serial/VerifyMountFirst 0.38
212 TestMountStart/serial/StartWithMountSecond 25.14
213 TestMountStart/serial/VerifyMountSecond 0.38
214 TestMountStart/serial/DeleteFirst 0.7
215 TestMountStart/serial/VerifyMountPostDelete 0.38
216 TestMountStart/serial/Stop 1.28
217 TestMountStart/serial/RestartStopped 21.18
218 TestMountStart/serial/VerifyMountPostStop 0.39
221 TestMultiNode/serial/FreshStart2Nodes 116.27
222 TestMultiNode/serial/DeployApp2Nodes 5.82
223 TestMultiNode/serial/PingHostFrom2Pods 0.83
224 TestMultiNode/serial/AddNode 49.75
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.6
227 TestMultiNode/serial/CopyFile 7.3
228 TestMultiNode/serial/StopNode 2.24
229 TestMultiNode/serial/StartAfterStop 39.57
231 TestMultiNode/serial/DeleteNode 2.14
233 TestMultiNode/serial/RestartMultiNode 178.03
234 TestMultiNode/serial/ValidateNameConflict 44.72
241 TestScheduledStopUnix 114.56
245 TestRunningBinaryUpgrade 164.42
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
251 TestNoKubernetes/serial/StartWithK8s 114.94
259 TestNetworkPlugins/group/false 3.17
263 TestNoKubernetes/serial/StartWithStopK8s 41.06
264 TestNoKubernetes/serial/Start 27.27
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
266 TestNoKubernetes/serial/ProfileList 1.29
267 TestNoKubernetes/serial/Stop 1.3
268 TestNoKubernetes/serial/StartNoArgs 20.68
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
270 TestStoppedBinaryUpgrade/Setup 2.35
271 TestStoppedBinaryUpgrade/Upgrade 115.19
272 TestStoppedBinaryUpgrade/MinikubeLogs 0.99
281 TestPause/serial/Start 78.67
282 TestNetworkPlugins/group/auto/Start 78.9
283 TestNetworkPlugins/group/kindnet/Start 102.35
284 TestNetworkPlugins/group/calico/Start 111.91
286 TestNetworkPlugins/group/auto/KubeletFlags 0.38
287 TestNetworkPlugins/group/auto/NetCatPod 13.28
288 TestNetworkPlugins/group/auto/DNS 0.18
289 TestNetworkPlugins/group/auto/Localhost 0.15
290 TestNetworkPlugins/group/auto/HairPin 0.16
291 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
292 TestNetworkPlugins/group/custom-flannel/Start 73.04
293 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
294 TestNetworkPlugins/group/kindnet/NetCatPod 12.28
295 TestNetworkPlugins/group/kindnet/DNS 0.19
296 TestNetworkPlugins/group/kindnet/Localhost 0.15
297 TestNetworkPlugins/group/kindnet/HairPin 0.14
298 TestNetworkPlugins/group/enable-default-cni/Start 67.62
299 TestNetworkPlugins/group/flannel/Start 98.26
300 TestNetworkPlugins/group/calico/ControllerPod 6.01
301 TestNetworkPlugins/group/calico/KubeletFlags 0.23
302 TestNetworkPlugins/group/calico/NetCatPod 11.23
303 TestNetworkPlugins/group/calico/DNS 0.15
304 TestNetworkPlugins/group/calico/Localhost 0.14
305 TestNetworkPlugins/group/calico/HairPin 0.15
306 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
307 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.37
308 TestNetworkPlugins/group/bridge/Start 58.91
309 TestNetworkPlugins/group/custom-flannel/DNS 0.21
310 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
311 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
312 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
313 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.31
316 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
317 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
318 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
320 TestStartStop/group/no-preload/serial/FirstStart 79.2
321 TestNetworkPlugins/group/flannel/ControllerPod 6.01
322 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
323 TestNetworkPlugins/group/bridge/NetCatPod 12.26
324 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
325 TestNetworkPlugins/group/flannel/NetCatPod 14.28
326 TestNetworkPlugins/group/bridge/DNS 0.17
327 TestNetworkPlugins/group/bridge/Localhost 0.14
328 TestNetworkPlugins/group/bridge/HairPin 0.15
329 TestNetworkPlugins/group/flannel/DNS 0.17
330 TestNetworkPlugins/group/flannel/Localhost 0.14
331 TestNetworkPlugins/group/flannel/HairPin 0.14
333 TestStartStop/group/embed-certs/serial/FirstStart 60.77
335 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 71.24
336 TestStartStop/group/no-preload/serial/DeployApp 11.28
337 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
339 TestStartStop/group/embed-certs/serial/DeployApp 11.64
340 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.09
342 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.91
346 TestStartStop/group/no-preload/serial/SecondStart 684.33
350 TestStartStop/group/embed-certs/serial/SecondStart 594.78
352 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 561.54
353 TestStartStop/group/old-k8s-version/serial/Stop 4.56
354 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
365 TestStartStop/group/newest-cni/serial/FirstStart 46.36
366 TestStartStop/group/newest-cni/serial/DeployApp 0
367 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.06
368 TestStartStop/group/newest-cni/serial/Stop 10.59
369 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
370 TestStartStop/group/newest-cni/serial/SecondStart 39.39
371 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
372 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
374 TestStartStop/group/newest-cni/serial/Pause 2.41
x
+
TestDownloadOnly/v1.20.0/json-events (23.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-591101 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-591101 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (23.355914983s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (23.36s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 17:36:00.733024  244849 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0920 17:36:00.733126  244849 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
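A quick way to spot-check the same preload cache outside this CI run is to list the tarball directly; the Jenkins path above corresponds to $MINIKUBE_HOME (typically ~/.minikube) on a local machine, which is an assumption about the reader's setup rather than something the test asserts:

	ls -lh ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4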

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-591101
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-591101: exit status 85 (62.902179ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-591101 | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC |          |
	|         | -p download-only-591101        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:35:37
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:35:37.416177  244861 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:35:37.416297  244861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:35:37.416307  244861 out.go:358] Setting ErrFile to fd 2...
	I0920 17:35:37.416312  244861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:35:37.416514  244861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	W0920 17:35:37.416690  244861 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19679-237658/.minikube/config/config.json: open /home/jenkins/minikube-integration/19679-237658/.minikube/config/config.json: no such file or directory
	I0920 17:35:37.417322  244861 out.go:352] Setting JSON to true
	I0920 17:35:37.418325  244861 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4680,"bootTime":1726849057,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:35:37.418445  244861 start.go:139] virtualization: kvm guest
	I0920 17:35:37.421110  244861 out.go:97] [download-only-591101] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 17:35:37.421251  244861 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 17:35:37.421337  244861 notify.go:220] Checking for updates...
	I0920 17:35:37.422588  244861 out.go:169] MINIKUBE_LOCATION=19679
	I0920 17:35:37.424019  244861 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:35:37.425473  244861 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:35:37.426903  244861 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:35:37.428474  244861 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0920 17:35:37.430976  244861 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 17:35:37.431197  244861 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:35:37.467328  244861 out.go:97] Using the kvm2 driver based on user configuration
	I0920 17:35:37.467358  244861 start.go:297] selected driver: kvm2
	I0920 17:35:37.467364  244861 start.go:901] validating driver "kvm2" against <nil>
	I0920 17:35:37.467710  244861 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:35:37.467806  244861 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 17:35:37.484224  244861 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 17:35:37.484316  244861 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:35:37.485222  244861 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0920 17:35:37.485447  244861 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 17:35:37.485483  244861 cni.go:84] Creating CNI manager for ""
	I0920 17:35:37.485611  244861 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 17:35:37.485661  244861 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 17:35:37.486120  244861 start.go:340] cluster config:
	{Name:download-only-591101 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-591101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:35:37.486319  244861 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:35:37.488487  244861 out.go:97] Downloading VM boot image ...
	I0920 17:35:37.488517  244861 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19679-237658/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 17:35:47.085429  244861 out.go:97] Starting "download-only-591101" primary control-plane node in "download-only-591101" cluster
	I0920 17:35:47.085483  244861 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 17:35:47.181496  244861 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 17:35:47.181545  244861 cache.go:56] Caching tarball of preloaded images
	I0920 17:35:47.181712  244861 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 17:35:47.183832  244861 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 17:35:47.183867  244861 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0920 17:35:47.288981  244861 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-591101 host does not exist
	  To start a cluster, run: "minikube start -p download-only-591101"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-591101
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (11.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-799771 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-799771 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.938505919s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (11.94s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 17:36:13.013679  244849 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0920 17:36:13.013758  244849 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-799771
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-799771: exit status 85 (102.18741ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-591101 | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC |                     |
	|         | -p download-only-591101        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC | 20 Sep 24 17:36 UTC |
	| delete  | -p download-only-591101        | download-only-591101 | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC | 20 Sep 24 17:36 UTC |
	| start   | -o=json --download-only        | download-only-799771 | jenkins | v1.34.0 | 20 Sep 24 17:36 UTC |                     |
	|         | -p download-only-799771        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:36:01
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:36:01.114756  245113 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:36:01.114897  245113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:36:01.114911  245113 out.go:358] Setting ErrFile to fd 2...
	I0920 17:36:01.114918  245113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:36:01.115114  245113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 17:36:01.115749  245113 out.go:352] Setting JSON to true
	I0920 17:36:01.116709  245113 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4704,"bootTime":1726849057,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:36:01.116777  245113 start.go:139] virtualization: kvm guest
	I0920 17:36:01.118820  245113 out.go:97] [download-only-799771] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:36:01.119021  245113 notify.go:220] Checking for updates...
	I0920 17:36:01.120448  245113 out.go:169] MINIKUBE_LOCATION=19679
	I0920 17:36:01.121702  245113 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:36:01.123254  245113 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:36:01.124884  245113 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:36:01.126383  245113 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0920 17:36:01.128857  245113 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 17:36:01.129098  245113 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:36:01.163355  245113 out.go:97] Using the kvm2 driver based on user configuration
	I0920 17:36:01.163392  245113 start.go:297] selected driver: kvm2
	I0920 17:36:01.163399  245113 start.go:901] validating driver "kvm2" against <nil>
	I0920 17:36:01.163763  245113 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:36:01.163878  245113 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19679-237658/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 17:36:01.180576  245113 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 17:36:01.180667  245113 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:36:01.181409  245113 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0920 17:36:01.181668  245113 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 17:36:01.181712  245113 cni.go:84] Creating CNI manager for ""
	I0920 17:36:01.181786  245113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 17:36:01.181799  245113 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 17:36:01.181873  245113 start.go:340] cluster config:
	{Name:download-only-799771 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-799771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:36:01.182036  245113 iso.go:125] acquiring lock: {Name:mkf1fd63548b1395dd1434eb4ce769a8a5f4b32e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:36:01.184003  245113 out.go:97] Starting "download-only-799771" primary control-plane node in "download-only-799771" cluster
	I0920 17:36:01.184039  245113 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:36:01.630930  245113 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 17:36:01.630977  245113 cache.go:56] Caching tarball of preloaded images
	I0920 17:36:01.631159  245113 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 17:36:01.633284  245113 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0920 17:36:01.633325  245113 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0920 17:36:01.736959  245113 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19679-237658/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-799771 host does not exist
	  To start a cluster, run: "minikube start -p download-only-799771"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.10s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-799771
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.65s)

                                                
                                                
=== RUN   TestBinaryMirror
I0920 17:36:13.661813  244849 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-242308 --alsologtostderr --binary-mirror http://127.0.0.1:46511 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-242308" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-242308
--- PASS: TestBinaryMirror (0.65s)

                                                
                                    
x
+
TestOffline (104.72s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-106360 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-106360 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m43.082183136s)
helpers_test.go:175: Cleaning up "offline-crio-106360" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-106360
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-106360: (1.636949264s)
--- PASS: TestOffline (104.72s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-679190
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-679190: exit status 85 (56.64416ms)

                                                
                                                
-- stdout --
	* Profile "addons-679190" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-679190"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-679190
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-679190: exit status 85 (55.28544ms)

                                                
                                                
-- stdout --
	* Profile "addons-679190" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-679190"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (136.19s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-679190 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-679190 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m16.192420089s)
--- PASS: TestAddons/Setup (136.19s)
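If you reproduce this setup locally, one way to confirm which of the requested addons actually came up is the standard addons listing against the same profile (an illustrative follow-up command, not part of the test itself):

	out/minikube-linux-amd64 addons list -p addons-679190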

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-679190 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-679190 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.8s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-zvp8w" [129950eb-616d-4833-89b8-f95506fba347] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005331957s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-679190
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-679190: (5.792400176s)
--- PASS: TestAddons/parallel/InspektorGadget (11.80s)

                                                
                                    
x
+
TestAddons/parallel/CSI (59.96s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0920 17:46:47.057211  244849 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 17:46:47.064500  244849 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 17:46:47.064532  244849 kapi.go:107] duration metric: took 7.34261ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 7.353998ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-679190 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-679190 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0296753d-89af-4fe9-89fd-ba28d702e221] Pending
helpers_test.go:344: "task-pv-pod" [0296753d-89af-4fe9-89fd-ba28d702e221] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0296753d-89af-4fe9-89fd-ba28d702e221] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.005789931s
addons_test.go:528: (dbg) Run:  kubectl --context addons-679190 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-679190 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-679190 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-679190 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-679190 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-679190 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-679190 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [57dc2729-d54a-4fb2-9a42-ab4dda317ab3] Pending
helpers_test.go:344: "task-pv-pod-restore" [57dc2729-d54a-4fb2-9a42-ab4dda317ab3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [57dc2729-d54a-4fb2-9a42-ab4dda317ab3] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004252444s
addons_test.go:570: (dbg) Run:  kubectl --context addons-679190 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-679190 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-679190 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-679190 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-679190 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.973073273s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-679190 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:586: (dbg) Done: out/minikube-linux-amd64 -p addons-679190 addons disable volumesnapshots --alsologtostderr -v=1: (1.103389082s)
--- PASS: TestAddons/parallel/CSI (59.96s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (20.1s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-679190 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-679190 --alsologtostderr -v=1: (1.176649511s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-qf9bv" [c7672593-db27-413c-82c5-59d0890329f5] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-qf9bv" [c7672593-db27-413c-82c5-59d0890329f5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-qf9bv" [c7672593-db27-413c-82c5-59d0890329f5] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004085625s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-679190 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-679190 addons disable headlamp --alsologtostderr -v=1: (5.918637761s)
--- PASS: TestAddons/parallel/Headlamp (20.10s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (7.13s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-k8qkq" [8c8173fb-2833-4d4c-89d4-78848dadac6e] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.042828049s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-679190
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-679190: (1.070385423s)
--- PASS: TestAddons/parallel/CloudSpanner (7.13s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (13.43s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-679190 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-679190 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-679190 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [dc95bbfd-4d16-43fd-a66e-b66d63c1e995] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [dc95bbfd-4d16-43fd-a66e-b66d63c1e995] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [dc95bbfd-4d16-43fd-a66e-b66d63c1e995] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004781503s
addons_test.go:938: (dbg) Run:  kubectl --context addons-679190 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-679190 ssh "cat /opt/local-path-provisioner/pvc-4a7cfa23-ab8c-4f3b-b69f-a32cbb6790dc_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-679190 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-679190 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-679190 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (13.43s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.77s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-b5wj9" [eb9faaf1-05e4-4f88-abbb-479f222d2664] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00386399s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-679190
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.77s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.08s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-bqnws" [e9b37329-9e1c-4014-8c87-d7d9f9205745] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005131236s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-679190 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-679190 addons disable yakd --alsologtostderr -v=1: (6.071702898s)
--- PASS: TestAddons/parallel/Yakd (12.08s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (92.72s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-679190
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-679190: (1m32.432985591s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-679190
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-679190
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-679190
--- PASS: TestAddons/StoppedEnableDisable (92.72s)
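When replaying this sequence by hand, the profile's stopped state can be confirmed before exercising the addons commands; minikube status reports the host and kubelet as Stopped after the stop above (an illustrative check, not run by the test):

	out/minikube-linux-amd64 status -p addons-679190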

                                                
                                    
x
+
TestCertOptions (43.46s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-178420 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-178420 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (42.143875532s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-178420 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-178420 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-178420 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-178420" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-178420
--- PASS: TestCertOptions (43.46s)
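
Note: the commands above boil down to starting a cluster with extra apiserver SANs and a non-default port, then confirming they appear in the serving certificate. A hand-run sketch (placeholder profile name, regular minikube binary):

    minikube start -p <profile> --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
    # the extra IPs and names should be listed among the certificate's Subject Alternative Names
    minikube -p <profile> ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"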

                                                
                                    
x
+
TestCertExpiration (312.96s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-648841 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-648841 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m29.240360359s)
E0920 18:48:14.011175  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-648841 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-648841 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (42.624876525s)
helpers_test.go:175: Cleaning up "cert-expiration-648841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-648841
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-648841: (1.095533048s)
--- PASS: TestCertExpiration (312.96s)
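
Note: the test issues short-lived certificates, waits out the 3-minute window (hence the ~5-minute total), then restarts the same profile with a much longer expiration. A hand-run sketch (placeholder profile name):

    minikube start -p <profile> --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    # ...wait for the 3m certificates to expire...
    minikube start -p <profile> --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
    minikube delete -p <profile>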

                                                
                                    
x
+
TestForceSystemdFlag (60.79s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-589067 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0920 18:47:29.487279  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-589067 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (59.587063456s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-589067 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-589067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-589067
--- PASS: TestForceSystemdFlag (60.79s)
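
Note: a hand-run sketch of the same check, assuming (as the test appears to) that --force-systemd should surface as the systemd cgroup manager in CRI-O's drop-in config. Placeholder profile name:

    minikube start -p <profile> --memory=2048 --force-systemd --driver=kvm2 --container-runtime=crio
    # inspect the generated CRI-O drop-in; cgroup_manager is expected to be set to systemd
    minikube -p <profile> ssh "cat /etc/crio/crio.conf.d/02-crio.conf"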

                                                
                                    
x
+
TestForceSystemdEnv (42.26s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-160147 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-160147 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (41.460786191s)
helpers_test.go:175: Cleaning up "force-systemd-env-160147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-160147
--- PASS: TestForceSystemdEnv (42.26s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.18s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0920 18:51:09.278892  244849 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 18:51:09.279040  244849 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0920 18:51:09.312989  244849 install.go:62] docker-machine-driver-kvm2: exit status 1
W0920 18:51:09.313488  244849 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0920 18:51:09.313583  244849 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3844704152/001/docker-machine-driver-kvm2
I0920 18:51:09.547819  244849 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3844704152/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc00061e710 gz:0xc00061e718 tar:0xc00061e6b0 tar.bz2:0xc00061e6c0 tar.gz:0xc00061e6d0 tar.xz:0xc00061e6f0 tar.zst:0xc00061e700 tbz2:0xc00061e6c0 tgz:0xc00061e6d0 txz:0xc00061e6f0 tzst:0xc00061e700 xz:0xc00061e740 zip:0xc00061e760 zst:0xc00061e748] Getters:map[file:0xc00257e380 http:0xc000a96460 https:0xc000a964b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 18:51:09.547893  244849 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3844704152/001/docker-machine-driver-kvm2
I0920 18:51:11.622118  244849 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 18:51:11.622230  244849 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0920 18:51:11.663566  244849 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0920 18:51:11.663606  244849 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0920 18:51:11.663696  244849 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0920 18:51:11.663735  244849 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3844704152/002/docker-machine-driver-kvm2
I0920 18:51:11.703144  244849 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3844704152/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc00061e710 gz:0xc00061e718 tar:0xc00061e6b0 tar.bz2:0xc00061e6c0 tar.gz:0xc00061e6d0 tar.xz:0xc00061e6f0 tar.zst:0xc00061e700 tbz2:0xc00061e6c0 tgz:0xc00061e6d0 txz:0xc00061e6f0 tzst:0xc00061e700 xz:0xc00061e740 zip:0xc00061e760 zst:0xc00061e748] Getters:map[file:0xc0022166e0 http:0xc0005e0780 https:0xc0005e07d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 18:51:11.703199  244849 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3844704152/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.18s)

                                                
                                    
x
+
TestErrorSpam/setup (41.12s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-282278 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-282278 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-282278 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-282278 --driver=kvm2  --container-runtime=crio: (41.114866052s)
--- PASS: TestErrorSpam/setup (41.12s)

                                                
                                    
x
+
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282278 --log_dir /tmp/nospam-282278 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282278 --log_dir /tmp/nospam-282278 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282278 --log_dir /tmp/nospam-282278 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
x
+
TestErrorSpam/status (0.74s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282278 --log_dir /tmp/nospam-282278 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282278 --log_dir /tmp/nospam-282278 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282278 --log_dir /tmp/nospam-282278 status
--- PASS: TestErrorSpam/status (0.74s)

                                                
                                    
x
+
TestErrorSpam/pause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282278 --log_dir /tmp/nospam-282278 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282278 --log_dir /tmp/nospam-282278 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282278 --log_dir /tmp/nospam-282278 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.69s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282278 --log_dir /tmp/nospam-282278 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282278 --log_dir /tmp/nospam-282278 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282278 --log_dir /tmp/nospam-282278 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

                                                
                                    
x
+
TestErrorSpam/stop (4.44s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282278 --log_dir /tmp/nospam-282278 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-282278 --log_dir /tmp/nospam-282278 stop: (1.598503126s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282278 --log_dir /tmp/nospam-282278 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-282278 --log_dir /tmp/nospam-282278 stop: (1.515747785s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282278 --log_dir /tmp/nospam-282278 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-282278 --log_dir /tmp/nospam-282278 stop: (1.320648521s)
--- PASS: TestErrorSpam/stop (4.44s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19679-237658/.minikube/files/etc/test/nested/copy/244849/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (54.92s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-024386 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-024386 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (54.91623727s)
--- PASS: TestFunctional/serial/StartWithProxy (54.92s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (39.79s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0920 17:55:56.832841  244849 config.go:182] Loaded profile config "functional-024386": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-024386 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-024386 --alsologtostderr -v=8: (39.793636798s)
functional_test.go:663: soft start took 39.794563344s for "functional-024386" cluster.
I0920 17:56:36.626982  244849 config.go:182] Loaded profile config "functional-024386": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (39.79s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-024386 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.15s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (5.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-024386 cache add registry.k8s.io/pause:3.1: (1.731282626s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-024386 cache add registry.k8s.io/pause:3.3: (1.792694443s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-024386 cache add registry.k8s.io/pause:latest: (1.719596258s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.24s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.54s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-024386 /tmp/TestFunctionalserialCacheCmdcacheadd_local349866620/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 cache add minikube-local-cache-test:functional-024386
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-024386 cache add minikube-local-cache-test:functional-024386: (2.192169323s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 cache delete minikube-local-cache-test:functional-024386
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-024386
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.54s)
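
Note: a hand-run sketch of caching a locally built image. The profile name and build-context directory are placeholders; the test builds a throwaway image from a temp directory:

    docker build -t minikube-local-cache-test:<profile> <build-context-dir>
    minikube -p <profile> cache add minikube-local-cache-test:<profile>
    # clean up the cache entry and the local image afterwards
    minikube -p <profile> cache delete minikube-local-cache-test:<profile>
    docker rmi minikube-local-cache-test:<profile>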

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-024386 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (204.813026ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-024386 cache reload: (1.448390425s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.12s)
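
Note: the reload check removes a cached image inside the node, verifies it is gone, then restores it from minikube's cache. A hand-run sketch (placeholder profile name; assumes the image was previously added with "cache add", as in add_remote above):

    minikube -p <profile> ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail: image is gone
    minikube -p <profile> cache reload
    minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again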

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 kubectl -- --context functional-024386 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-024386 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (34.81s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-024386 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-024386 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.805380095s)
functional_test.go:761: restart took 34.805532022s for "functional-024386" cluster.
I0920 17:57:22.135746  244849 config.go:182] Loaded profile config "functional-024386": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (34.81s)
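
Note: a hand-run sketch of the restart with an extra apiserver flag; --wait=all keeps the command blocking until all components report healthy (placeholder profile name):

    minikube start -p <profile> --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all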

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-024386 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-024386 logs: (1.375658512s)
--- PASS: TestFunctional/serial/LogsCmd (1.38s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 logs --file /tmp/TestFunctionalserialLogsFileCmd3743733549/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-024386 logs --file /tmp/TestFunctionalserialLogsFileCmd3743733549/001/logs.txt: (1.314314733s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.32s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.34s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-024386 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-024386
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-024386: exit status 115 (286.113431ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.75:32513 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-024386 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.34s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-024386 config get cpus: exit status 14 (60.949915ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-024386 config get cpus: exit status 14 (53.332756ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)
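
Note: a hand-run sketch of the config round-trip exercised above; "config get" on an unset key exits with status 14 and "specified key could not be found in config" (placeholder profile name):

    minikube -p <profile> config set cpus 2
    minikube -p <profile> config get cpus     # prints 2
    minikube -p <profile> config unset cpus
    minikube -p <profile> config get cpus     # exit status 14: key not found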

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (32.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-024386 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-024386 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 255759: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (32.38s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-024386 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-024386 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (146.786227ms)

                                                
                                                
-- stdout --
	* [functional-024386] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 17:57:41.908701  255274 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:57:41.908831  255274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:57:41.908840  255274 out.go:358] Setting ErrFile to fd 2...
	I0920 17:57:41.908844  255274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:57:41.909006  255274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 17:57:41.909545  255274 out.go:352] Setting JSON to false
	I0920 17:57:41.910574  255274 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6005,"bootTime":1726849057,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:57:41.910677  255274 start.go:139] virtualization: kvm guest
	I0920 17:57:41.913026  255274 out.go:177] * [functional-024386] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:57:41.914402  255274 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 17:57:41.914459  255274 notify.go:220] Checking for updates...
	I0920 17:57:41.916519  255274 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:57:41.917925  255274 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:57:41.919461  255274 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:57:41.920680  255274 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:57:41.922096  255274 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:57:41.924110  255274 config.go:182] Loaded profile config "functional-024386": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:57:41.924707  255274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:57:41.924781  255274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:57:41.941393  255274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34011
	I0920 17:57:41.941881  255274 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:57:41.942547  255274 main.go:141] libmachine: Using API Version  1
	I0920 17:57:41.942585  255274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:57:41.942977  255274 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:57:41.943182  255274 main.go:141] libmachine: (functional-024386) Calling .DriverName
	I0920 17:57:41.943448  255274 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:57:41.943754  255274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:57:41.943789  255274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:57:41.960894  255274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34747
	I0920 17:57:41.961405  255274 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:57:41.961987  255274 main.go:141] libmachine: Using API Version  1
	I0920 17:57:41.962017  255274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:57:41.962388  255274 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:57:41.962569  255274 main.go:141] libmachine: (functional-024386) Calling .DriverName
	I0920 17:57:42.001386  255274 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 17:57:42.002994  255274 start.go:297] selected driver: kvm2
	I0920 17:57:42.003017  255274 start.go:901] validating driver "kvm2" against &{Name:functional-024386 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-024386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.75 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:57:42.003129  255274 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:57:42.005936  255274 out.go:201] 
	W0920 17:57:42.007464  255274 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 17:57:42.008887  255274 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-024386 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
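
Note: --dry-run validates the requested settings against the existing profile without changing it. A hand-run sketch (placeholder profile name): the 250MB request is below the 1800MB minimum, so the first command exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second succeeds:

    minikube start -p <profile> --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    minikube start -p <profile> --dry-run --driver=kvm2 --container-runtime=crio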

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-024386 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-024386 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (152.470477ms)

                                                
                                                
-- stdout --
	* [functional-024386] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 17:57:42.202461  255370 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:57:42.202581  255370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:57:42.202591  255370 out.go:358] Setting ErrFile to fd 2...
	I0920 17:57:42.202596  255370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:57:42.203084  255370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 17:57:42.203615  255370 out.go:352] Setting JSON to false
	I0920 17:57:42.204673  255370 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6005,"bootTime":1726849057,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:57:42.204808  255370 start.go:139] virtualization: kvm guest
	I0920 17:57:42.206662  255370 out.go:177] * [functional-024386] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0920 17:57:42.207950  255370 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 17:57:42.208001  255370 notify.go:220] Checking for updates...
	I0920 17:57:42.210511  255370 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:57:42.211811  255370 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 17:57:42.213219  255370 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 17:57:42.214512  255370 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:57:42.215700  255370 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:57:42.217211  255370 config.go:182] Loaded profile config "functional-024386": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 17:57:42.217656  255370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:57:42.217729  255370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:57:42.238850  255370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35825
	I0920 17:57:42.239324  255370 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:57:42.239991  255370 main.go:141] libmachine: Using API Version  1
	I0920 17:57:42.240015  255370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:57:42.240343  255370 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:57:42.240567  255370 main.go:141] libmachine: (functional-024386) Calling .DriverName
	I0920 17:57:42.240872  255370 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:57:42.241228  255370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 17:57:42.241277  255370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:57:42.259645  255370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36603
	I0920 17:57:42.260162  255370 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:57:42.260668  255370 main.go:141] libmachine: Using API Version  1
	I0920 17:57:42.260698  255370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:57:42.261026  255370 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:57:42.261248  255370 main.go:141] libmachine: (functional-024386) Calling .DriverName
	I0920 17:57:42.299126  255370 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0920 17:57:42.300353  255370 start.go:297] selected driver: kvm2
	I0920 17:57:42.300372  255370 start.go:901] validating driver "kvm2" against &{Name:functional-024386 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-024386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.75 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:57:42.300539  255370 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:57:42.302943  255370 out.go:201] 
	W0920 17:57:42.304144  255370 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 17:57:42.305365  255370 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-024386 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-024386 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-c7p22" [c10ecbed-7c7c-4ab7-a14f-b6f065733d2e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-c7p22" [c10ecbed-7c7c-4ab7-a14f-b6f065733d2e] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004304713s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.75:31907
functional_test.go:1675: http://192.168.39.75:31907: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-c7p22

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.75:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.75:31907
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.64s)
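
Note: a hand-run sketch of the NodePort round-trip above: create an echoserver deployment, expose it, and ask minikube for the reachable URL (placeholder profile/context name):

    kubectl --context <profile> create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context <profile> expose deployment hello-node-connect --type=NodePort --port=8080
    minikube -p <profile> service hello-node-connect --url   # prints http://<node-ip>:<nodeport>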

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (44.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1732fc86-2e16-49ae-9a93-d6a6fd7ada8c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003972307s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-024386 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-024386 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-024386 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-024386 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7d2f4888-a724-4510-b245-15e0b1706c39] Pending
helpers_test.go:344: "sp-pod" [7d2f4888-a724-4510-b245-15e0b1706c39] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7d2f4888-a724-4510-b245-15e0b1706c39] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004527513s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-024386 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-024386 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-024386 delete -f testdata/storage-provisioner/pod.yaml: (1.859795207s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-024386 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5f29bd96-e6ab-4326-9e61-a670692c259a] Pending
helpers_test.go:344: "sp-pod" [5f29bd96-e6ab-4326-9e61-a670692c259a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5f29bd96-e6ab-4326-9e61-a670692c259a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.004487094s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-024386 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.70s)
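The PVC test applies a claim plus a pod that mounts it, writes a file, deletes and recreates the pod, and checks the file survives. The manifest below is a hypothetical stand-in for testdata/storage-provisioner/pvc.yaml and pod.yaml (their exact contents, including the requested storage size, are not shown in this log); the object names, label, and mount path are taken from the output above.

// pvc_sketch.go - applies a hypothetical PVC + pod pair resembling what the
// PersistentVolumeClaim test does (the real manifests live under testdata/).
package main

import (
	"os/exec"
	"strings"
)

const manifests = `
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi   # assumed size, not taken from this log
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: docker.io/library/nginx
    volumeMounts:
    - mountPath: /tmp/mount
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
`

func main() {
	cmd := exec.Command("kubectl", "--context", "functional-024386", "apply", "-f", "-")
	cmd.Stdin = strings.NewReader(manifests)
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(string(out))
	}
	// Data written under /tmp/mount in sp-pod should survive deleting and
	// recreating the pod, which is what the touch/ls round-trip above verifies.
}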

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh -n functional-024386 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 cp functional-024386:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2992382404/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh -n functional-024386 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh -n functional-024386 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.47s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (33.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-024386 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-8869t" [c870d6ad-61c7-4ebf-93ed-ca1da8b67ae3] Pending
helpers_test.go:344: "mysql-6cdb49bbb-8869t" [c870d6ad-61c7-4ebf-93ed-ca1da8b67ae3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-8869t" [c870d6ad-61c7-4ebf-93ed-ca1da8b67ae3] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 28.00427582s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-024386 exec mysql-6cdb49bbb-8869t -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-024386 exec mysql-6cdb49bbb-8869t -- mysql -ppassword -e "show databases;": exit status 1 (221.866729ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0920 17:58:08.766572  244849 retry.go:31] will retry after 665.711292ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-024386 exec mysql-6cdb49bbb-8869t -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-024386 exec mysql-6cdb49bbb-8869t -- mysql -ppassword -e "show databases;": exit status 1 (128.32437ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0920 17:58:09.561572  244849 retry.go:31] will retry after 1.384884344s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-024386 exec mysql-6cdb49bbb-8869t -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-024386 exec mysql-6cdb49bbb-8869t -- mysql -ppassword -e "show databases;": exit status 1 (274.114581ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0920 17:58:11.221225  244849 retry.go:31] will retry after 2.07005904s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-024386 exec mysql-6cdb49bbb-8869t -- mysql -ppassword -e "show databases;"
2024/09/20 17:58:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (33.10s)
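The ERROR 2002 retries above occur because the mysql container is Running before mysqld is actually listening on its socket, so the test retries with a growing delay (retry.go). A minimal Go sketch of that retry pattern, using the pod name from the log; the backoff values and file name are illustrative:

// mysql_retry_sketch.go - retry `kubectl exec ... mysql` with growing backoff until
// mysqld accepts connections, mirroring the retry.go behaviour seen above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-6cdb49bbb-8869t" // pod name from the log above
	delay := 500 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-024386",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		// ERROR 2002: the server process exists but is not yet listening on its socket.
		fmt.Printf("attempt %d failed (%v), retrying after %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	panic("mysql never became reachable")
}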

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/244849/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh "sudo cat /etc/test/nested/copy/244849/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)
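FileSync relies on minikube's file-sync convention: files placed under $MINIKUBE_HOME/.minikube/files/<path> on the host are copied to /<path> inside the guest. A sketch of that round trip, assuming the documented layout; the 244849 component is the test's PID as seen in the paths above:

// filesync_sketch.go - write a file under the host-side sync directory, then read it
// back from the same path inside the VM over `minikube ssh` (assumes the documented
// $MINIKUBE_HOME/.minikube/files/<path> -> /<path> mapping).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	home, _ := os.UserHomeDir()
	src := filepath.Join(home, ".minikube", "files", "etc", "test", "nested", "copy", "244849", "hosts")
	_ = os.MkdirAll(filepath.Dir(src), 0o755)
	_ = os.WriteFile(src, []byte("Test file for checking file sync process"), 0o644)

	// A (re)start of the profile copies the file into the guest; afterwards it can
	// be read back over SSH, exactly as the test does above.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-024386",
		"ssh", "sudo cat /etc/test/nested/copy/244849/hosts").CombinedOutput()
	fmt.Println(string(out), err)
}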

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/244849.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh "sudo cat /etc/ssl/certs/244849.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/244849.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh "sudo cat /usr/share/ca-certificates/244849.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2448492.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh "sudo cat /etc/ssl/certs/2448492.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2448492.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh "sudo cat /usr/share/ca-certificates/2448492.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.44s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-024386 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-024386 ssh "sudo systemctl is-active docker": exit status 1 (266.760658ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-024386 ssh "sudo systemctl is-active containerd": exit status 1 (258.218454ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)
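The non-zero exits above are the expected result: with crio as the active runtime, `systemctl is-active docker` and `... containerd` print "inactive" and exit with status 3, which propagates through ssh as a failure. A short Go sketch of that check; the file name is illustrative:

// runtime_inactive_sketch.go - confirm that runtimes other than the active one (crio
// here) are not running, using `systemctl is-active` over minikube ssh. A non-zero
// exit (typically 3, meaning "inactive") is the desired outcome.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-024386",
			"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
		// err != nil is expected here: "inactive" on stdout plus a non-zero exit code.
		fmt.Printf("%s: %q (err=%v)\n", unit, string(out), err)
	}
}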

                                                
                                    
x
+
TestFunctional/parallel/License (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2288: (dbg) Done: out/minikube-linux-amd64 license: (1.079256711s)
--- PASS: TestFunctional/parallel/License (1.08s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-024386 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-024386 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-4krkq" [49468776-3f98-40c2-8af5-bac9494ff3cf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-4krkq" [49468776-3f98-40c2-8af5-bac9494ff3cf] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.009081155s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.26s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "335.715178ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "53.446561ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (12.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-024386 /tmp/TestFunctionalparallelMountCmdany-port3440545473/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726855050270006791" to /tmp/TestFunctionalparallelMountCmdany-port3440545473/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726855050270006791" to /tmp/TestFunctionalparallelMountCmdany-port3440545473/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726855050270006791" to /tmp/TestFunctionalparallelMountCmdany-port3440545473/001/test-1726855050270006791
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-024386 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (206.61673ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 17:57:30.477029  244849 retry.go:31] will retry after 576.936388ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 17:57 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 17:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 17:57 test-1726855050270006791
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh cat /mount-9p/test-1726855050270006791
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-024386 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9ec8bae5-dbb2-41d6-bce9-618941a34256] Pending
helpers_test.go:344: "busybox-mount" [9ec8bae5-dbb2-41d6-bce9-618941a34256] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9ec8bae5-dbb2-41d6-bce9-618941a34256] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9ec8bae5-dbb2-41d6-bce9-618941a34256] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 10.004258156s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-024386 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-024386 /tmp/TestFunctionalparallelMountCmdany-port3440545473/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (12.68s)
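The 9p mount is established asynchronously, which is why the first findmnt probe at 17:57:30 fails and is retried. A Go sketch of that start-then-poll pattern; the host directory /tmp/mount-src and the file name are hypothetical:

// mount_probe_sketch.go - start a 9p mount in the background and poll findmnt until
// the mount point appears inside the guest, mirroring the retry in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Long-running mount process (equivalent to the `daemon:` line above).
	mount := exec.Command("out/minikube-linux-amd64", "mount", "-p", "functional-024386",
		"/tmp/mount-src:/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill()

	// Poll until the 9p filesystem shows up at /mount-9p.
	for i := 0; i < 10; i++ {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-024386",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("mount never appeared")
}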

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "349.368082ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "59.625191ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.70s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-024386 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/kicbase/echo-server           | functional-024386  | 9056ab77afb8e | 4.94MB |
| localhost/my-image                      | functional-024386  | 44bd1deef69b6 | 1.47MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | 39286ab8a5e14 | 192MB  |
| localhost/minikube-local-cache-test     | functional-024386  | b313a6b3b8781 | 3.33kB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-024386 image ls --format table --alsologtostderr:
I0920 17:58:08.511374  256311 out.go:345] Setting OutFile to fd 1 ...
I0920 17:58:08.511491  256311 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:58:08.511501  256311 out.go:358] Setting ErrFile to fd 2...
I0920 17:58:08.511506  256311 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:58:08.511699  256311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
I0920 17:58:08.512298  256311 config.go:182] Loaded profile config "functional-024386": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 17:58:08.512409  256311 config.go:182] Loaded profile config "functional-024386": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 17:58:08.512818  256311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 17:58:08.512871  256311 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 17:58:08.528779  256311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44181
I0920 17:58:08.529383  256311 main.go:141] libmachine: () Calling .GetVersion
I0920 17:58:08.530053  256311 main.go:141] libmachine: Using API Version  1
I0920 17:58:08.530075  256311 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 17:58:08.530439  256311 main.go:141] libmachine: () Calling .GetMachineName
I0920 17:58:08.530666  256311 main.go:141] libmachine: (functional-024386) Calling .GetState
I0920 17:58:08.532596  256311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 17:58:08.532642  256311 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 17:58:08.549039  256311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
I0920 17:58:08.549576  256311 main.go:141] libmachine: () Calling .GetVersion
I0920 17:58:08.550211  256311 main.go:141] libmachine: Using API Version  1
I0920 17:58:08.550254  256311 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 17:58:08.550613  256311 main.go:141] libmachine: () Calling .GetMachineName
I0920 17:58:08.550804  256311 main.go:141] libmachine: (functional-024386) Calling .DriverName
I0920 17:58:08.551032  256311 ssh_runner.go:195] Run: systemctl --version
I0920 17:58:08.551073  256311 main.go:141] libmachine: (functional-024386) Calling .GetSSHHostname
I0920 17:58:08.554227  256311 main.go:141] libmachine: (functional-024386) DBG | domain functional-024386 has defined MAC address 52:54:00:08:4e:4d in network mk-functional-024386
I0920 17:58:08.554680  256311 main.go:141] libmachine: (functional-024386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:4e:4d", ip: ""} in network mk-functional-024386: {Iface:virbr1 ExpiryTime:2024-09-20 18:55:16 +0000 UTC Type:0 Mac:52:54:00:08:4e:4d Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:functional-024386 Clientid:01:52:54:00:08:4e:4d}
I0920 17:58:08.554701  256311 main.go:141] libmachine: (functional-024386) DBG | domain functional-024386 has defined IP address 192.168.39.75 and MAC address 52:54:00:08:4e:4d in network mk-functional-024386
I0920 17:58:08.554860  256311 main.go:141] libmachine: (functional-024386) Calling .GetSSHPort
I0920 17:58:08.555007  256311 main.go:141] libmachine: (functional-024386) Calling .GetSSHKeyPath
I0920 17:58:08.555127  256311 main.go:141] libmachine: (functional-024386) Calling .GetSSHUsername
I0920 17:58:08.555232  256311 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/functional-024386/id_rsa Username:docker}
I0920 17:58:08.660465  256311 ssh_runner.go:195] Run: sudo crictl images --output json
I0920 17:58:08.754656  256311 main.go:141] libmachine: Making call to close driver server
I0920 17:58:08.754675  256311 main.go:141] libmachine: (functional-024386) Calling .Close
I0920 17:58:08.754983  256311 main.go:141] libmachine: Successfully made call to close driver server
I0920 17:58:08.755003  256311 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 17:58:08.755021  256311 main.go:141] libmachine: Making call to close driver server
I0920 17:58:08.755030  256311 main.go:141] libmachine: (functional-024386) Calling .Close
I0920 17:58:08.755334  256311 main.go:141] libmachine: Successfully made call to close driver server
I0920 17:58:08.755351  256311 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-024386 image ls --format json --alsologtostderr:
[{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853369"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776a
a0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"2959e2ca2c79440c72b22353246cd26fd39cca1104ae16953e33db4730512cc9","repoDigests":["docker.io/library/8350c06864438b09e00cde9dfd7f4717f040c159786b0cb935bd96e0d2443cea-tmp@sha256:5f6e4d271a6e1ef2b661e481596366e6e6ad3c423fbc9b62b9c6282357f91958"],"repoTags":[],"size":"1466018"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c
0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"b313a6b3b8781e3a6d0093c1822df653226829959ab459a72dd3a97962673d2d","repoDigests":["localhost/minikube-local-cache-test@sha256:f7b24924fc7858637f4ec9bfe6ddbf07c29300b31d080a600a9fe1d54f7efedb"],"repoTags":["localhost/minikube-local-cache-test:functional-024386"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTag
s":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a
687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-024386"],"size":"4943877"},{"id":"44bd1deef69b64938ce4e46778e0535b81623c5cecc2481e15ebc88348391a8a","repoDigests":["localhost/my-image@sha256:af46b67846d0a591029ae3f4cc6e76096a05ea0043893975a6b7efe9eaeb873a"],"repoTags":["localhost/my-image:functional-024386"],"size"
:"1468600"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61
fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-024386 image ls --format json --alsologtostderr:
I0920 17:58:07.620954  256287 out.go:345] Setting OutFile to fd 1 ...
I0920 17:58:07.621118  256287 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:58:07.621132  256287 out.go:358] Setting ErrFile to fd 2...
I0920 17:58:07.621139  256287 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:58:07.621413  256287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
I0920 17:58:07.622352  256287 config.go:182] Loaded profile config "functional-024386": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 17:58:07.622515  256287 config.go:182] Loaded profile config "functional-024386": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 17:58:07.623083  256287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 17:58:07.623144  256287 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 17:58:07.639030  256287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44695
I0920 17:58:07.639632  256287 main.go:141] libmachine: () Calling .GetVersion
I0920 17:58:07.640323  256287 main.go:141] libmachine: Using API Version  1
I0920 17:58:07.640344  256287 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 17:58:07.640693  256287 main.go:141] libmachine: () Calling .GetMachineName
I0920 17:58:07.640914  256287 main.go:141] libmachine: (functional-024386) Calling .GetState
I0920 17:58:07.642780  256287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 17:58:07.642820  256287 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 17:58:07.657921  256287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35639
I0920 17:58:07.658397  256287 main.go:141] libmachine: () Calling .GetVersion
I0920 17:58:07.658922  256287 main.go:141] libmachine: Using API Version  1
I0920 17:58:07.658946  256287 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 17:58:07.659323  256287 main.go:141] libmachine: () Calling .GetMachineName
I0920 17:58:07.659580  256287 main.go:141] libmachine: (functional-024386) Calling .DriverName
I0920 17:58:07.659814  256287 ssh_runner.go:195] Run: systemctl --version
I0920 17:58:07.659855  256287 main.go:141] libmachine: (functional-024386) Calling .GetSSHHostname
I0920 17:58:07.663372  256287 main.go:141] libmachine: (functional-024386) DBG | domain functional-024386 has defined MAC address 52:54:00:08:4e:4d in network mk-functional-024386
I0920 17:58:07.663933  256287 main.go:141] libmachine: (functional-024386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:4e:4d", ip: ""} in network mk-functional-024386: {Iface:virbr1 ExpiryTime:2024-09-20 18:55:16 +0000 UTC Type:0 Mac:52:54:00:08:4e:4d Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:functional-024386 Clientid:01:52:54:00:08:4e:4d}
I0920 17:58:07.663970  256287 main.go:141] libmachine: (functional-024386) DBG | domain functional-024386 has defined IP address 192.168.39.75 and MAC address 52:54:00:08:4e:4d in network mk-functional-024386
I0920 17:58:07.664129  256287 main.go:141] libmachine: (functional-024386) Calling .GetSSHPort
I0920 17:58:07.664345  256287 main.go:141] libmachine: (functional-024386) Calling .GetSSHKeyPath
I0920 17:58:07.664550  256287 main.go:141] libmachine: (functional-024386) Calling .GetSSHUsername
I0920 17:58:07.664709  256287 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/functional-024386/id_rsa Username:docker}
I0920 17:58:07.785700  256287 ssh_runner.go:195] Run: sudo crictl images --output json
I0920 17:58:08.458790  256287 main.go:141] libmachine: Making call to close driver server
I0920 17:58:08.458807  256287 main.go:141] libmachine: (functional-024386) Calling .Close
I0920 17:58:08.459127  256287 main.go:141] libmachine: Successfully made call to close driver server
I0920 17:58:08.459151  256287 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 17:58:08.459160  256287 main.go:141] libmachine: (functional-024386) DBG | Closing plugin on server side
I0920 17:58:08.459169  256287 main.go:141] libmachine: Making call to close driver server
I0920 17:58:08.459179  256287 main.go:141] libmachine: (functional-024386) Calling .Close
I0920 17:58:08.459526  256287 main.go:141] libmachine: Successfully made call to close driver server
I0920 17:58:08.459541  256287 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.89s)
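The stderr above shows that `image ls` is implemented by running `sudo crictl images --output json` inside the guest and re-rendering the result. A sketch of consuming that JSON directly; the struct field names follow crictl's CRI-shaped output and are an assumption, since this log only shows minikube's re-rendered list:

// crictl_images_sketch.go - run `crictl images --output json` inside the VM over
// minikube ssh and decode the result (field names assumed from crictl's CRI output).
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-024386",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Printf("%-16.16s  %v  %s bytes\n", img.ID, img.RepoTags, img.Size)
	}
}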

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 image ls --format yaml --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p functional-024386 image ls --format yaml --alsologtostderr: (1.323257472s)
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-024386 image ls --format yaml --alsologtostderr:
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e
repoTags:
- docker.io/library/nginx:latest
size: "191853369"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: b313a6b3b8781e3a6d0093c1822df653226829959ab459a72dd3a97962673d2d
repoDigests:
- localhost/minikube-local-cache-test@sha256:f7b24924fc7858637f4ec9bfe6ddbf07c29300b31d080a600a9fe1d54f7efedb
repoTags:
- localhost/minikube-local-cache-test:functional-024386
size: "3330"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-024386
size: "4943877"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-024386 image ls --format yaml --alsologtostderr:
I0920 17:57:59.339920  256150 out.go:345] Setting OutFile to fd 1 ...
I0920 17:57:59.340043  256150 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:57:59.340054  256150 out.go:358] Setting ErrFile to fd 2...
I0920 17:57:59.340057  256150 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:57:59.340242  256150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
I0920 17:57:59.340854  256150 config.go:182] Loaded profile config "functional-024386": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 17:57:59.340965  256150 config.go:182] Loaded profile config "functional-024386": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 17:57:59.341358  256150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 17:57:59.341408  256150 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 17:57:59.357207  256150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41593
I0920 17:57:59.357844  256150 main.go:141] libmachine: () Calling .GetVersion
I0920 17:57:59.358553  256150 main.go:141] libmachine: Using API Version  1
I0920 17:57:59.358590  256150 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 17:57:59.359001  256150 main.go:141] libmachine: () Calling .GetMachineName
I0920 17:57:59.359227  256150 main.go:141] libmachine: (functional-024386) Calling .GetState
I0920 17:57:59.361581  256150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 17:57:59.361639  256150 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 17:57:59.377384  256150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39873
I0920 17:57:59.377976  256150 main.go:141] libmachine: () Calling .GetVersion
I0920 17:57:59.378591  256150 main.go:141] libmachine: Using API Version  1
I0920 17:57:59.378618  256150 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 17:57:59.378999  256150 main.go:141] libmachine: () Calling .GetMachineName
I0920 17:57:59.379265  256150 main.go:141] libmachine: (functional-024386) Calling .DriverName
I0920 17:57:59.379511  256150 ssh_runner.go:195] Run: systemctl --version
I0920 17:57:59.379545  256150 main.go:141] libmachine: (functional-024386) Calling .GetSSHHostname
I0920 17:57:59.383214  256150 main.go:141] libmachine: (functional-024386) DBG | domain functional-024386 has defined MAC address 52:54:00:08:4e:4d in network mk-functional-024386
I0920 17:57:59.383688  256150 main.go:141] libmachine: (functional-024386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:4e:4d", ip: ""} in network mk-functional-024386: {Iface:virbr1 ExpiryTime:2024-09-20 18:55:16 +0000 UTC Type:0 Mac:52:54:00:08:4e:4d Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:functional-024386 Clientid:01:52:54:00:08:4e:4d}
I0920 17:57:59.383721  256150 main.go:141] libmachine: (functional-024386) DBG | domain functional-024386 has defined IP address 192.168.39.75 and MAC address 52:54:00:08:4e:4d in network mk-functional-024386
I0920 17:57:59.383844  256150 main.go:141] libmachine: (functional-024386) Calling .GetSSHPort
I0920 17:57:59.384061  256150 main.go:141] libmachine: (functional-024386) Calling .GetSSHKeyPath
I0920 17:57:59.384234  256150 main.go:141] libmachine: (functional-024386) Calling .GetSSHUsername
I0920 17:57:59.384466  256150 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/functional-024386/id_rsa Username:docker}
I0920 17:57:59.483843  256150 ssh_runner.go:195] Run: sudo crictl images --output json
I0920 17:58:00.602636  256150 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.118742012s)
I0920 17:58:00.603627  256150 main.go:141] libmachine: Making call to close driver server
I0920 17:58:00.603645  256150 main.go:141] libmachine: (functional-024386) Calling .Close
I0920 17:58:00.603992  256150 main.go:141] libmachine: Successfully made call to close driver server
I0920 17:58:00.604016  256150 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 17:58:00.604025  256150 main.go:141] libmachine: Making call to close driver server
I0920 17:58:00.604034  256150 main.go:141] libmachine: (functional-024386) Calling .Close
I0920 17:58:00.603997  256150 main.go:141] libmachine: (functional-024386) DBG | Closing plugin on server side
I0920 17:58:00.604274  256150 main.go:141] libmachine: Successfully made call to close driver server
I0920 17:58:00.604291  256150 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (1.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.793460775s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-024386
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 image load --daemon kicbase/echo-server:functional-024386 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-024386 image load --daemon kicbase/echo-server:functional-024386 --alsologtostderr: (1.3330957s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 image load --daemon kicbase/echo-server:functional-024386 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-024386
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 image load --daemon kicbase/echo-server:functional-024386 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 image save kicbase/echo-server:functional-024386 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 image rm kicbase/echo-server:functional-024386 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-024386
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 image save --daemon kicbase/echo-server:functional-024386 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-024386
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 service list -o json
functional_test.go:1494: Took "263.627733ms" to run "out/minikube-linux-amd64 -p functional-024386 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.75:32605
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.75:32605
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-024386 /tmp/TestFunctionalparallelMountCmdspecific-port3107843833/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-024386 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (282.225496ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 17:57:43.231735  244849 retry.go:31] will retry after 263.891666ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-024386 /tmp/TestFunctionalparallelMountCmdspecific-port3107843833/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-024386 ssh "sudo umount -f /mount-9p": exit status 1 (244.166198ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-024386 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-024386 /tmp/TestFunctionalparallelMountCmdspecific-port3107843833/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.85s)
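The "retry.go:31] will retry after 263.891666ms" line above is the generic poll-and-retry pattern the mount tests use around the findmnt check. A minimal, hypothetical sketch of that pattern in Go (not minikube's actual retry package; the command, attempt count, and delay below are illustrative only):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryCheck re-runs check with a fixed delay until it succeeds or attempts are
// exhausted, logging a "will retry after" message on each failure, like the log above.
func retryCheck(attempts int, delay time.Duration, check func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = check(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	// Example: wait for the 9p mount to appear at /mount-9p (path taken from the test above).
	_ = retryCheck(5, 300*time.Millisecond, func() error {
		return exec.Command("findmnt", "-T", "/mount-9p").Run()
	})
}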

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-024386 /tmp/TestFunctionalparallelMountCmdVerifyCleanup441344812/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-024386 /tmp/TestFunctionalparallelMountCmdVerifyCleanup441344812/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-024386 /tmp/TestFunctionalparallelMountCmdVerifyCleanup441344812/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-024386 ssh "findmnt -T" /mount1: exit status 1 (439.298336ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 17:57:45.242317  244849 retry.go:31] will retry after 354.018353ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-024386 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-024386 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-024386 /tmp/TestFunctionalparallelMountCmdVerifyCleanup441344812/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-024386 /tmp/TestFunctionalparallelMountCmdVerifyCleanup441344812/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-024386 /tmp/TestFunctionalparallelMountCmdVerifyCleanup441344812/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.49s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-024386
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-024386
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-024386
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (199.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-347193 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0920 17:58:30.942061  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:30.948559  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:30.960099  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:30.981647  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:31.023183  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:31.104772  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:31.266380  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:31.587974  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:32.230129  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:33.511832  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:36.073740  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:41.195671  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:51.437342  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:59:11.919036  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:59:52.881019  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:01:14.802586  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-347193 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m19.081743276s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (199.78s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-347193 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-347193 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-347193 -- rollout status deployment/busybox: (5.711928486s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-347193 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-347193 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-347193 -- exec busybox-7dff88458-85fk6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-347193 -- exec busybox-7dff88458-p824h -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-347193 -- exec busybox-7dff88458-vv8nw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-347193 -- exec busybox-7dff88458-85fk6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-347193 -- exec busybox-7dff88458-p824h -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-347193 -- exec busybox-7dff88458-vv8nw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-347193 -- exec busybox-7dff88458-85fk6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-347193 -- exec busybox-7dff88458-p824h -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-347193 -- exec busybox-7dff88458-vv8nw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.92s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-347193 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-347193 -- exec busybox-7dff88458-85fk6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-347193 -- exec busybox-7dff88458-85fk6 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-347193 -- exec busybox-7dff88458-p824h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-347193 -- exec busybox-7dff88458-p824h -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-347193 -- exec busybox-7dff88458-vv8nw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-347193 -- exec busybox-7dff88458-vv8nw -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.22s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (56.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-347193 -v=7 --alsologtostderr
E0920 18:02:29.487339  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:29.493774  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:29.505240  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:29.526737  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:29.568186  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:29.649478  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:29.811066  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:30.133199  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:30.774810  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:32.056173  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:34.617965  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:39.740009  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-347193 -v=7 --alsologtostderr: (55.842056223s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.74s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-347193 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 cp testdata/cp-test.txt ha-347193:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 cp ha-347193:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3833348347/001/cp-test_ha-347193.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 cp ha-347193:/home/docker/cp-test.txt ha-347193-m02:/home/docker/cp-test_ha-347193_ha-347193-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m02 "sudo cat /home/docker/cp-test_ha-347193_ha-347193-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 cp ha-347193:/home/docker/cp-test.txt ha-347193-m03:/home/docker/cp-test_ha-347193_ha-347193-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m03 "sudo cat /home/docker/cp-test_ha-347193_ha-347193-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 cp ha-347193:/home/docker/cp-test.txt ha-347193-m04:/home/docker/cp-test_ha-347193_ha-347193-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193 "sudo cat /home/docker/cp-test.txt"
E0920 18:02:49.981457  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m04 "sudo cat /home/docker/cp-test_ha-347193_ha-347193-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 cp testdata/cp-test.txt ha-347193-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 cp ha-347193-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3833348347/001/cp-test_ha-347193-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 cp ha-347193-m02:/home/docker/cp-test.txt ha-347193:/home/docker/cp-test_ha-347193-m02_ha-347193.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193 "sudo cat /home/docker/cp-test_ha-347193-m02_ha-347193.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 cp ha-347193-m02:/home/docker/cp-test.txt ha-347193-m03:/home/docker/cp-test_ha-347193-m02_ha-347193-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m03 "sudo cat /home/docker/cp-test_ha-347193-m02_ha-347193-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 cp ha-347193-m02:/home/docker/cp-test.txt ha-347193-m04:/home/docker/cp-test_ha-347193-m02_ha-347193-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m04 "sudo cat /home/docker/cp-test_ha-347193-m02_ha-347193-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 cp testdata/cp-test.txt ha-347193-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 cp ha-347193-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3833348347/001/cp-test_ha-347193-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 cp ha-347193-m03:/home/docker/cp-test.txt ha-347193:/home/docker/cp-test_ha-347193-m03_ha-347193.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193 "sudo cat /home/docker/cp-test_ha-347193-m03_ha-347193.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 cp ha-347193-m03:/home/docker/cp-test.txt ha-347193-m02:/home/docker/cp-test_ha-347193-m03_ha-347193-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m02 "sudo cat /home/docker/cp-test_ha-347193-m03_ha-347193-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 cp ha-347193-m03:/home/docker/cp-test.txt ha-347193-m04:/home/docker/cp-test_ha-347193-m03_ha-347193-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m04 "sudo cat /home/docker/cp-test_ha-347193-m03_ha-347193-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 cp testdata/cp-test.txt ha-347193-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3833348347/001/cp-test_ha-347193-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt ha-347193:/home/docker/cp-test_ha-347193-m04_ha-347193.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193 "sudo cat /home/docker/cp-test_ha-347193-m04_ha-347193.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt ha-347193-m02:/home/docker/cp-test_ha-347193-m04_ha-347193-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m02 "sudo cat /home/docker/cp-test_ha-347193-m04_ha-347193-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 cp ha-347193-m04:/home/docker/cp-test.txt ha-347193-m03:/home/docker/cp-test_ha-347193-m04_ha-347193-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 ssh -n ha-347193-m03 "sudo cat /home/docker/cp-test_ha-347193-m04_ha-347193-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.25s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.030810519s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.03s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-347193 node delete m03 -v=7 --alsologtostderr: (15.831116246s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.63s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (349.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-347193 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0920 18:14:54.006194  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:17:29.488190  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:18:30.942099  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:18:52.557393  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-347193 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m48.948302876s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (349.76s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (79.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-347193 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-347193 --control-plane -v=7 --alsologtostderr: (1m19.005024335s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-347193 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.91s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                    
TestJSONOutput/start/Command (52.3s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-100720 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0920 18:22:29.486815  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-100720 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (52.297148966s)
--- PASS: TestJSONOutput/start/Command (52.30s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-100720 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-100720 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.65s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-100720 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-100720 --output=json --user=testUser: (6.650167855s)
--- PASS: TestJSONOutput/stop/Command (6.65s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-786628 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-786628 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (67.314139ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"eb4d255d-66ac-4b4f-8a41-4c43604ca46b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-786628] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a1c85a24-4051-4726-b9f5-8a2bb40cbbc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19679"}}
	{"specversion":"1.0","id":"86552e65-dee2-48a5-a8a7-04980c054c12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4bae6e54-cf1e-4fc2-9dd6-c13e338579d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig"}}
	{"specversion":"1.0","id":"1f4d8b20-89ac-4f2e-91a8-52d0b73d6c9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube"}}
	{"specversion":"1.0","id":"3f0a1369-415f-4b6a-bb06-21776f88da62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e333f4a7-968d-4abe-a484-0d94946275a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5b72df3a-c5d7-45af-ae90-ba661e70983a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-786628" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-786628
--- PASS: TestErrorJSONOutput (0.21s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (88.83s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-939506 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-939506 --driver=kvm2  --container-runtime=crio: (44.379932828s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-953605 --driver=kvm2  --container-runtime=crio
E0920 18:23:30.942647  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-953605 --driver=kvm2  --container-runtime=crio: (41.762452509s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-939506
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-953605
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-953605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-953605
helpers_test.go:175: Cleaning up "first-939506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-939506
--- PASS: TestMinikubeProfile (88.83s)

TestMountStart/serial/StartWithMountFirst (24.7s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-417379 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-417379 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.701134722s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.70s)

TestMountStart/serial/VerifyMountFirst (0.38s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-417379 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-417379 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

TestMountStart/serial/StartWithMountSecond (25.14s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-436956 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-436956 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.136464056s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.14s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-436956 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-436956 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-417379 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-436956 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-436956 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-436956
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-436956: (1.281748796s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (21.18s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-436956
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-436956: (20.180601181s)
--- PASS: TestMountStart/serial/RestartStopped (21.18s)

TestMountStart/serial/VerifyMountPostStop (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-436956 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-436956 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (116.27s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-029872 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-029872 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m55.847620395s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (116.27s)

TestMultiNode/serial/DeployApp2Nodes (5.82s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029872 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029872 -- rollout status deployment/busybox
E0920 18:27:29.487724  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-029872 -- rollout status deployment/busybox: (4.32820684s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029872 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029872 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029872 -- exec busybox-7dff88458-8vvbm -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029872 -- exec busybox-7dff88458-xff6m -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029872 -- exec busybox-7dff88458-8vvbm -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029872 -- exec busybox-7dff88458-xff6m -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029872 -- exec busybox-7dff88458-8vvbm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029872 -- exec busybox-7dff88458-xff6m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.82s)

TestMultiNode/serial/PingHostFrom2Pods (0.83s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029872 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029872 -- exec busybox-7dff88458-8vvbm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029872 -- exec busybox-7dff88458-8vvbm -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029872 -- exec busybox-7dff88458-xff6m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029872 -- exec busybox-7dff88458-xff6m -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)

TestMultiNode/serial/AddNode (49.75s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-029872 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-029872 -v 3 --alsologtostderr: (49.154251351s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.75s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-029872 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.6s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

TestMultiNode/serial/CopyFile (7.3s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 cp testdata/cp-test.txt multinode-029872:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 ssh -n multinode-029872 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 cp multinode-029872:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2317310210/001/cp-test_multinode-029872.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 ssh -n multinode-029872 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 cp multinode-029872:/home/docker/cp-test.txt multinode-029872-m02:/home/docker/cp-test_multinode-029872_multinode-029872-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 ssh -n multinode-029872 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 ssh -n multinode-029872-m02 "sudo cat /home/docker/cp-test_multinode-029872_multinode-029872-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 cp multinode-029872:/home/docker/cp-test.txt multinode-029872-m03:/home/docker/cp-test_multinode-029872_multinode-029872-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 ssh -n multinode-029872 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 ssh -n multinode-029872-m03 "sudo cat /home/docker/cp-test_multinode-029872_multinode-029872-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 cp testdata/cp-test.txt multinode-029872-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 ssh -n multinode-029872-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 cp multinode-029872-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2317310210/001/cp-test_multinode-029872-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 ssh -n multinode-029872-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 cp multinode-029872-m02:/home/docker/cp-test.txt multinode-029872:/home/docker/cp-test_multinode-029872-m02_multinode-029872.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 ssh -n multinode-029872-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 ssh -n multinode-029872 "sudo cat /home/docker/cp-test_multinode-029872-m02_multinode-029872.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 cp multinode-029872-m02:/home/docker/cp-test.txt multinode-029872-m03:/home/docker/cp-test_multinode-029872-m02_multinode-029872-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 ssh -n multinode-029872-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 ssh -n multinode-029872-m03 "sudo cat /home/docker/cp-test_multinode-029872-m02_multinode-029872-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 cp testdata/cp-test.txt multinode-029872-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 ssh -n multinode-029872-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 cp multinode-029872-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2317310210/001/cp-test_multinode-029872-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 ssh -n multinode-029872-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 cp multinode-029872-m03:/home/docker/cp-test.txt multinode-029872:/home/docker/cp-test_multinode-029872-m03_multinode-029872.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 ssh -n multinode-029872-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 ssh -n multinode-029872 "sudo cat /home/docker/cp-test_multinode-029872-m03_multinode-029872.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 cp multinode-029872-m03:/home/docker/cp-test.txt multinode-029872-m02:/home/docker/cp-test_multinode-029872-m03_multinode-029872-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 ssh -n multinode-029872-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 ssh -n multinode-029872-m02 "sudo cat /home/docker/cp-test_multinode-029872-m03_multinode-029872-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.30s)

TestMultiNode/serial/StopNode (2.24s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 node stop m03
E0920 18:28:30.942084  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-029872 node stop m03: (1.356152329s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-029872 status: exit status 7 (441.073422ms)

-- stdout --
	multinode-029872
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-029872-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-029872-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-029872 status --alsologtostderr: exit status 7 (444.535739ms)

-- stdout --
	multinode-029872
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-029872-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-029872-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0920 18:28:32.602763  273488 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:28:32.602893  273488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:28:32.602902  273488 out.go:358] Setting ErrFile to fd 2...
	I0920 18:28:32.602907  273488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:28:32.603092  273488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 18:28:32.603277  273488 out.go:352] Setting JSON to false
	I0920 18:28:32.603308  273488 mustload.go:65] Loading cluster: multinode-029872
	I0920 18:28:32.603347  273488 notify.go:220] Checking for updates...
	I0920 18:28:32.603719  273488 config.go:182] Loaded profile config "multinode-029872": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:28:32.603737  273488 status.go:174] checking status of multinode-029872 ...
	I0920 18:28:32.604118  273488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:28:32.604161  273488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:28:32.624360  273488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41329
	I0920 18:28:32.625009  273488 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:28:32.625636  273488 main.go:141] libmachine: Using API Version  1
	I0920 18:28:32.625671  273488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:28:32.626095  273488 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:28:32.626295  273488 main.go:141] libmachine: (multinode-029872) Calling .GetState
	I0920 18:28:32.628061  273488 status.go:364] multinode-029872 host status = "Running" (err=<nil>)
	I0920 18:28:32.628079  273488 host.go:66] Checking if "multinode-029872" exists ...
	I0920 18:28:32.628419  273488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:28:32.628469  273488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:28:32.644965  273488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41929
	I0920 18:28:32.645426  273488 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:28:32.646010  273488 main.go:141] libmachine: Using API Version  1
	I0920 18:28:32.646044  273488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:28:32.646385  273488 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:28:32.646585  273488 main.go:141] libmachine: (multinode-029872) Calling .GetIP
	I0920 18:28:32.649774  273488 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:28:32.650282  273488 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:28:32.650309  273488 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:28:32.650458  273488 host.go:66] Checking if "multinode-029872" exists ...
	I0920 18:28:32.650795  273488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:28:32.650851  273488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:28:32.667277  273488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45321
	I0920 18:28:32.667806  273488 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:28:32.668384  273488 main.go:141] libmachine: Using API Version  1
	I0920 18:28:32.668410  273488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:28:32.668713  273488 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:28:32.668913  273488 main.go:141] libmachine: (multinode-029872) Calling .DriverName
	I0920 18:28:32.669126  273488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:28:32.669150  273488 main.go:141] libmachine: (multinode-029872) Calling .GetSSHHostname
	I0920 18:28:32.672683  273488 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:28:32.673127  273488 main.go:141] libmachine: (multinode-029872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:17:52", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:25:44 +0000 UTC Type:0 Mac:52:54:00:ad:17:52 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-029872 Clientid:01:52:54:00:ad:17:52}
	I0920 18:28:32.673162  273488 main.go:141] libmachine: (multinode-029872) DBG | domain multinode-029872 has defined IP address 192.168.39.208 and MAC address 52:54:00:ad:17:52 in network mk-multinode-029872
	I0920 18:28:32.673523  273488 main.go:141] libmachine: (multinode-029872) Calling .GetSSHPort
	I0920 18:28:32.675243  273488 main.go:141] libmachine: (multinode-029872) Calling .GetSSHKeyPath
	I0920 18:28:32.675755  273488 main.go:141] libmachine: (multinode-029872) Calling .GetSSHUsername
	I0920 18:28:32.675985  273488 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/multinode-029872/id_rsa Username:docker}
	I0920 18:28:32.757329  273488 ssh_runner.go:195] Run: systemctl --version
	I0920 18:28:32.763176  273488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:28:32.777597  273488 kubeconfig.go:125] found "multinode-029872" server: "https://192.168.39.208:8443"
	I0920 18:28:32.777639  273488 api_server.go:166] Checking apiserver status ...
	I0920 18:28:32.777683  273488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:28:32.791953  273488 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1083/cgroup
	W0920 18:28:32.802958  273488 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1083/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:28:32.803014  273488 ssh_runner.go:195] Run: ls
	I0920 18:28:32.807814  273488 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I0920 18:28:32.812301  273488 api_server.go:279] https://192.168.39.208:8443/healthz returned 200:
	ok
	I0920 18:28:32.812343  273488 status.go:456] multinode-029872 apiserver status = Running (err=<nil>)
	I0920 18:28:32.812354  273488 status.go:176] multinode-029872 status: &{Name:multinode-029872 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:28:32.812375  273488 status.go:174] checking status of multinode-029872-m02 ...
	I0920 18:28:32.812702  273488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:28:32.812810  273488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:28:32.830057  273488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43741
	I0920 18:28:32.830702  273488 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:28:32.831288  273488 main.go:141] libmachine: Using API Version  1
	I0920 18:28:32.831315  273488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:28:32.831869  273488 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:28:32.832123  273488 main.go:141] libmachine: (multinode-029872-m02) Calling .GetState
	I0920 18:28:32.834309  273488 status.go:364] multinode-029872-m02 host status = "Running" (err=<nil>)
	I0920 18:28:32.834334  273488 host.go:66] Checking if "multinode-029872-m02" exists ...
	I0920 18:28:32.834693  273488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:28:32.834756  273488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:28:32.851334  273488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37065
	I0920 18:28:32.851839  273488 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:28:32.852321  273488 main.go:141] libmachine: Using API Version  1
	I0920 18:28:32.852347  273488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:28:32.852620  273488 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:28:32.852743  273488 main.go:141] libmachine: (multinode-029872-m02) Calling .GetIP
	I0920 18:28:32.855246  273488 main.go:141] libmachine: (multinode-029872-m02) DBG | domain multinode-029872-m02 has defined MAC address 52:54:00:ee:4f:d9 in network mk-multinode-029872
	I0920 18:28:32.855696  273488 main.go:141] libmachine: (multinode-029872-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:4f:d9", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:26:47 +0000 UTC Type:0 Mac:52:54:00:ee:4f:d9 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-029872-m02 Clientid:01:52:54:00:ee:4f:d9}
	I0920 18:28:32.855721  273488 main.go:141] libmachine: (multinode-029872-m02) DBG | domain multinode-029872-m02 has defined IP address 192.168.39.168 and MAC address 52:54:00:ee:4f:d9 in network mk-multinode-029872
	I0920 18:28:32.855886  273488 host.go:66] Checking if "multinode-029872-m02" exists ...
	I0920 18:28:32.856179  273488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:28:32.856230  273488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:28:32.873459  273488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39085
	I0920 18:28:32.873874  273488 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:28:32.874460  273488 main.go:141] libmachine: Using API Version  1
	I0920 18:28:32.874485  273488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:28:32.874834  273488 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:28:32.875071  273488 main.go:141] libmachine: (multinode-029872-m02) Calling .DriverName
	I0920 18:28:32.875273  273488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:28:32.875293  273488 main.go:141] libmachine: (multinode-029872-m02) Calling .GetSSHHostname
	I0920 18:28:32.878437  273488 main.go:141] libmachine: (multinode-029872-m02) DBG | domain multinode-029872-m02 has defined MAC address 52:54:00:ee:4f:d9 in network mk-multinode-029872
	I0920 18:28:32.879017  273488 main.go:141] libmachine: (multinode-029872-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:4f:d9", ip: ""} in network mk-multinode-029872: {Iface:virbr1 ExpiryTime:2024-09-20 19:26:47 +0000 UTC Type:0 Mac:52:54:00:ee:4f:d9 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-029872-m02 Clientid:01:52:54:00:ee:4f:d9}
	I0920 18:28:32.879048  273488 main.go:141] libmachine: (multinode-029872-m02) DBG | domain multinode-029872-m02 has defined IP address 192.168.39.168 and MAC address 52:54:00:ee:4f:d9 in network mk-multinode-029872
	I0920 18:28:32.879168  273488 main.go:141] libmachine: (multinode-029872-m02) Calling .GetSSHPort
	I0920 18:28:32.879340  273488 main.go:141] libmachine: (multinode-029872-m02) Calling .GetSSHKeyPath
	I0920 18:28:32.879472  273488 main.go:141] libmachine: (multinode-029872-m02) Calling .GetSSHUsername
	I0920 18:28:32.879593  273488 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19679-237658/.minikube/machines/multinode-029872-m02/id_rsa Username:docker}
	I0920 18:28:32.962409  273488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:28:32.980539  273488 status.go:176] multinode-029872-m02 status: &{Name:multinode-029872-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:28:32.980579  273488 status.go:174] checking status of multinode-029872-m03 ...
	I0920 18:28:32.980891  273488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:28:32.980931  273488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:28:32.997418  273488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42479
	I0920 18:28:32.997946  273488 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:28:32.998562  273488 main.go:141] libmachine: Using API Version  1
	I0920 18:28:32.998594  273488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:28:32.998951  273488 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:28:32.999161  273488 main.go:141] libmachine: (multinode-029872-m03) Calling .GetState
	I0920 18:28:33.000978  273488 status.go:364] multinode-029872-m03 host status = "Stopped" (err=<nil>)
	I0920 18:28:33.000993  273488 status.go:377] host is not running, skipping remaining checks
	I0920 18:28:33.000999  273488 status.go:176] multinode-029872-m03 status: &{Name:multinode-029872-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)

TestMultiNode/serial/StartAfterStop (39.57s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-029872 node start m03 -v=7 --alsologtostderr: (38.90421245s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.57s)

TestMultiNode/serial/DeleteNode (2.14s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-029872 node delete m03: (1.603191999s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.14s)

TestMultiNode/serial/RestartMultiNode (178.03s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-029872 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0920 18:37:29.487291  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:38:30.942788  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-029872 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m57.465400496s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029872 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (178.03s)

TestMultiNode/serial/ValidateNameConflict (44.72s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-029872
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-029872-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-029872-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (67.58185ms)

-- stdout --
	* [multinode-029872-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-029872-m02' is duplicated with machine name 'multinode-029872-m02' in profile 'multinode-029872'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-029872-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-029872-m03 --driver=kvm2  --container-runtime=crio: (43.581633023s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-029872
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-029872: exit status 80 (208.749436ms)

-- stdout --
	* Adding node m03 to cluster multinode-029872 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-029872-m03 already exists in multinode-029872-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-029872-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.72s)

TestScheduledStopUnix (114.56s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-556796 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-556796 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.889748724s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-556796 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-556796 -n scheduled-stop-556796
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-556796 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0920 18:44:28.040169  244849 retry.go:31] will retry after 56.943µs: open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/scheduled-stop-556796/pid: no such file or directory
I0920 18:44:28.041361  244849 retry.go:31] will retry after 111.209µs: open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/scheduled-stop-556796/pid: no such file or directory
I0920 18:44:28.042539  244849 retry.go:31] will retry after 247.493µs: open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/scheduled-stop-556796/pid: no such file or directory
I0920 18:44:28.043687  244849 retry.go:31] will retry after 379.743µs: open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/scheduled-stop-556796/pid: no such file or directory
I0920 18:44:28.044857  244849 retry.go:31] will retry after 336.36µs: open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/scheduled-stop-556796/pid: no such file or directory
I0920 18:44:28.045960  244849 retry.go:31] will retry after 898.109µs: open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/scheduled-stop-556796/pid: no such file or directory
I0920 18:44:28.047118  244849 retry.go:31] will retry after 746.378µs: open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/scheduled-stop-556796/pid: no such file or directory
I0920 18:44:28.048252  244849 retry.go:31] will retry after 2.415918ms: open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/scheduled-stop-556796/pid: no such file or directory
I0920 18:44:28.051472  244849 retry.go:31] will retry after 2.742595ms: open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/scheduled-stop-556796/pid: no such file or directory
I0920 18:44:28.054727  244849 retry.go:31] will retry after 3.289479ms: open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/scheduled-stop-556796/pid: no such file or directory
I0920 18:44:28.058978  244849 retry.go:31] will retry after 8.270563ms: open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/scheduled-stop-556796/pid: no such file or directory
I0920 18:44:28.068261  244849 retry.go:31] will retry after 8.234208ms: open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/scheduled-stop-556796/pid: no such file or directory
I0920 18:44:28.077547  244849 retry.go:31] will retry after 10.141229ms: open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/scheduled-stop-556796/pid: no such file or directory
I0920 18:44:28.088844  244849 retry.go:31] will retry after 16.152639ms: open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/scheduled-stop-556796/pid: no such file or directory
I0920 18:44:28.106163  244849 retry.go:31] will retry after 42.005409ms: open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/scheduled-stop-556796/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-556796 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-556796 -n scheduled-stop-556796
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-556796
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-556796 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-556796
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-556796: exit status 7 (65.928583ms)

-- stdout --
	scheduled-stop-556796
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-556796 -n scheduled-stop-556796
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-556796 -n scheduled-stop-556796: exit status 7 (63.116483ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-556796" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-556796
--- PASS: TestScheduledStopUnix (114.56s)

TestRunningBinaryUpgrade (164.42s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1972459501 start -p running-upgrade-901769 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0920 18:48:30.942126  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/addons-679190/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1972459501 start -p running-upgrade-901769 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m25.308907527s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-901769 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-901769 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m14.853110112s)
helpers_test.go:175: Cleaning up "running-upgrade-901769" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-901769
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-901769: (1.454858753s)
--- PASS: TestRunningBinaryUpgrade (164.42s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-115059 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-115059 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (84.932112ms)

-- stdout --
	* [NoKubernetes-115059] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (114.94s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-115059 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-115059 --driver=kvm2  --container-runtime=crio: (1m54.675044957s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-115059 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (114.94s)

TestNetworkPlugins/group/false (3.17s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-793540 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-793540 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (102.63875ms)

-- stdout --
	* [false-793540] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0920 18:46:24.446597  281590 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:46:24.446720  281590 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:46:24.446729  281590 out.go:358] Setting ErrFile to fd 2...
	I0920 18:46:24.446732  281590 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:46:24.446893  281590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-237658/.minikube/bin
	I0920 18:46:24.447475  281590 out.go:352] Setting JSON to false
	I0920 18:46:24.448428  281590 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8927,"bootTime":1726849057,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:46:24.448552  281590 start.go:139] virtualization: kvm guest
	I0920 18:46:24.451156  281590 out.go:177] * [false-793540] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:46:24.452529  281590 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 18:46:24.452556  281590 notify.go:220] Checking for updates...
	I0920 18:46:24.455142  281590 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:46:24.456365  281590 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-237658/kubeconfig
	I0920 18:46:24.457643  281590 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-237658/.minikube
	I0920 18:46:24.458876  281590 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:46:24.460182  281590 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:46:24.461833  281590 config.go:182] Loaded profile config "NoKubernetes-115059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:46:24.461952  281590 config.go:182] Loaded profile config "kubernetes-upgrade-149276": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 18:46:24.462040  281590 config.go:182] Loaded profile config "offline-crio-106360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:46:24.462149  281590 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:46:24.498481  281590 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 18:46:24.499609  281590 start.go:297] selected driver: kvm2
	I0920 18:46:24.499624  281590 start.go:901] validating driver "kvm2" against <nil>
	I0920 18:46:24.499636  281590 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:46:24.501691  281590 out.go:201] 
	W0920 18:46:24.503293  281590 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0920 18:46:24.504565  281590 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-793540 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-793540

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-793540

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-793540

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-793540

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-793540

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-793540

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-793540

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-793540

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-793540

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-793540

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: false-793540

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-793540" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-793540" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-793540

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793540"

                                                
                                                
----------------------- debugLogs end: false-793540 [took: 2.919278409s] --------------------------------
helpers_test.go:175: Cleaning up "false-793540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-793540
--- PASS: TestNetworkPlugins/group/false (3.17s)
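
The pass here is the rejection itself: with the crio container runtime minikube insists on a CNI, so the MK_USAGE exit captured in the stderr above is the expected outcome, and every debugLogs probe reports a missing context because the false-793540 profile was never actually created. A minimal sketch of reproducing the rejection by hand (the --cni=false flag is assumed from the test name and is not shown verbatim in this excerpt):

    out/minikube-linux-amd64 start -p false-793540 --cni=false --driver=kvm2 --container-runtime=crio
    # expected: non-zero exit with 'Exiting due to MK_USAGE: The "crio" container runtime requires CNI'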

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (41.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-115059 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-115059 --no-kubernetes --driver=kvm2  --container-runtime=crio: (40.036880211s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-115059 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-115059 status -o json: exit status 2 (229.728676ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-115059","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-115059
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (41.06s)
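
The stdout above shows the state the test asserts: the host VM is Running while Kubelet and APIServer stay Stopped, and the non-zero exit (status 2) from the status command is tolerated by the test in that state. A minimal shell sketch for checking the same condition outside the test harness (the jq usage is an assumption, not part of the test):

    out/minikube-linux-amd64 -p NoKubernetes-115059 status -o json | jq -r '.Host, .Kubelet, .APIServer'
    # expected output: Running, Stopped, Stopped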

                                                
                                    
x
+
TestNoKubernetes/serial/Start (27.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-115059 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-115059 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.27207896s)
--- PASS: TestNoKubernetes/serial/Start (27.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-115059 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-115059 "sudo systemctl is-active --quiet service kubelet": exit status 1 (201.559648ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
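
The ssh probe leans on systemctl semantics: "systemctl is-active" exits 0 only when the unit is active, so the exit status 3 surfaced through ssh above is exactly what the test wants to see when the kubelet service is not running. A minimal sketch without --quiet, which also prints the unit state (an illustration, not a command from the test):

    out/minikube-linux-amd64 ssh -p NoKubernetes-115059 "sudo systemctl is-active kubelet"
    # expected: prints "inactive" (or "unknown") and exits non-zero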

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-115059
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-115059: (1.299828971s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (20.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-115059 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-115059 --driver=kvm2  --container-runtime=crio: (20.684727259s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (20.68s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-115059 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-115059 "sudo systemctl is-active --quiet service kubelet": exit status 1 (215.169214ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.35s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.35s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (115.19s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.48636737 start -p stopped-upgrade-108885 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.48636737 start -p stopped-upgrade-108885 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m7.657271253s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.48636737 -p stopped-upgrade-108885 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.48636737 -p stopped-upgrade-108885 stop: (1.444671234s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-108885 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-108885 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.086307108s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (115.19s)
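
The three commands above are the whole upgrade check: a profile is created and stopped with the released v1.26.0 binary (the /tmp/minikube-v1.26.0.48636737 path, presumably fetched during the Setup step), and the freshly built out/minikube-linux-amd64 then has to bring that same stopped profile back up. A minimal follow-up sketch to confirm the upgraded profile is healthy (an assumption for illustration, not part of the test):

    out/minikube-linux-amd64 -p stopped-upgrade-108885 status
    # expected: Host, Kubelet and APIServer all report Running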

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-108885
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                    
x
+
TestPause/serial/Start (78.67s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-554447 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-554447 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m18.673608537s)
--- PASS: TestPause/serial/Start (78.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (78.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-793540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-793540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m18.903103839s)
--- PASS: TestNetworkPlugins/group/auto/Start (78.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (102.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-793540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0920 18:52:12.560330  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-793540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m42.34867008s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (102.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (111.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-793540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0920 18:52:29.487747  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-793540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m51.906433165s)
--- PASS: TestNetworkPlugins/group/calico/Start (111.91s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-793540 "pgrep -a kubelet"
I0920 18:53:03.837008  244849 config.go:182] Loaded profile config "auto-793540": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (13.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-793540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-t5dm2" [65264599-859b-43cc-9190-db8221db99f7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-t5dm2" [65264599-859b-43cc-9190-db8221db99f7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.005201831s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-793540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-793540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-793540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
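
The three short checks after NetCatPod repeat for every plugin group: DNS resolves kubernetes.default from inside the netcat pod, Localhost port-scans localhost:8080 in the pod, and HairPin asks the pod to reach its own service name, which typically requires hairpin NAT to be working. In the nc invocations above, -z performs a connect-only scan with no payload, -w 5 is the connect timeout in seconds, and -i 5 is the delay between probes; a minimal standalone sketch of the hairpin probe (same form as the logged command):

    kubectl --context auto-793540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
    # exit 0 means the pod can reach the netcat service that fronts it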

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-j7jm2" [6582f616-d0a9-41cb-ae12-f7b635b889c6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005653845s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (73.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-793540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-793540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m13.037166335s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-793540 "pgrep -a kubelet"
I0920 18:53:40.221037  244849 config.go:182] Loaded profile config "kindnet-793540": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-793540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ws6nx" [f1242527-2991-4610-b9ca-aa500fb2d8e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ws6nx" [f1242527-2991-4610-b9ca-aa500fb2d8e2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004551885s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-793540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-793540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-793540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (67.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-793540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-793540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m7.623162624s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (67.62s)
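
--enable-default-cni=true asks minikube to lay down its built-in default CNI configuration (a simple bridge-style config) instead of deploying a named plugin; the host: /etc/cni probes seen in the debugLogs sections of this report are one way to see which configuration actually landed on the node. A minimal sketch for inspecting it directly (an assumption for illustration, not a command from the test):

    out/minikube-linux-amd64 ssh -p enable-default-cni-793540 "ls /etc/cni/net.d"
    # lists the CNI config files installed for the profile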

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (98.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-793540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-793540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m38.263219333s)
--- PASS: TestNetworkPlugins/group/flannel/Start (98.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-sqvjh" [49a8b825-b3d4-417a-ad43-79f5ae930088] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005195246s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-793540 "pgrep -a kubelet"
I0920 18:54:21.848517  244849 config.go:182] Loaded profile config "calico-793540": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-793540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rn6gg" [6ac9ebe8-5ce7-423e-b165-e31f37379eaa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rn6gg" [6ac9ebe8-5ce7-423e-b165-e31f37379eaa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004314834s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-793540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-793540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-793540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-793540 "pgrep -a kubelet"
I0920 18:54:48.027718  244849 config.go:182] Loaded profile config "custom-flannel-793540": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-793540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jvsr4" [e7fb77e1-56e5-442d-a05c-e597496f8b2a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jvsr4" [e7fb77e1-56e5-442d-a05c-e597496f8b2a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005276069s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (58.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-793540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-793540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (58.908951628s)
--- PASS: TestNetworkPlugins/group/bridge/Start (58.91s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-793540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-793540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-793540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-793540 "pgrep -a kubelet"
I0920 18:55:13.672724  244849 config.go:182] Loaded profile config "enable-default-cni-793540": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-793540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-td4xg" [f7048597-d214-429a-9597-5d67bf2d2802] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-td4xg" [f7048597-d214-429a-9597-5d67bf2d2802] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003916189s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-793540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-793540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-793540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (79.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-037711 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-037711 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m19.201299386s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (79.20s)
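
--preload=false is what distinguishes this profile: instead of restoring the preloaded images tarball, the cluster has to pull its images individually, which is presumably why this FirstStart runs noticeably longer than the embed-certs one below. A minimal sketch for listing what ended up in the runtime (an assumption for illustration, not part of the test):

    out/minikube-linux-amd64 ssh -p no-preload-037711 "sudo crictl images"
    # shows the images pulled without the preload tarball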

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-n9ltq" [bb069d51-4ea9-45ad-8346-284c59a160cf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004978237s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-793540 "pgrep -a kubelet"
I0920 18:55:51.112498  244849 config.go:182] Loaded profile config "bridge-793540": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-793540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nl6nt" [8dca2480-a338-4587-9cad-7aa825fe7739] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nl6nt" [8dca2480-a338-4587-9cad-7aa825fe7739] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.005338417s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-793540 "pgrep -a kubelet"
I0920 18:55:56.389250  244849 config.go:182] Loaded profile config "flannel-793540": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (14.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-793540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pj2m2" [d310ab44-c548-4b3e-b15c-094defe1cff2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pj2m2" [d310ab44-c548-4b3e-b15c-094defe1cff2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.004009506s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-793540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-793540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-793540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
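Note: the Localhost and HairPin probes above differ only in their target. The first connects from the netcat pod to 127.0.0.1; the second connects from the pod back to itself through what is presumably its own "netcat" Service, which only works when hairpin traffic is handled correctly by the bridge CNI. Sketch of the two probes side by side (commands copied from the log; the Service name is an inference, not shown in this report):
  # Localhost path: the pod talks to itself directly.
  kubectl --context bridge-793540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # Hairpin path: the pod reaches itself via the 'netcat' Service.
  kubectl --context bridge-793540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"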

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-793540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-793540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-793540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)
E0920 19:25:50.145094  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:25:51.355574  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (60.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-339897 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-339897 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m0.770802883s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (60.77s)
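Note: --embed-certs makes minikube inline the client and CA certificates into kubeconfig instead of referencing certificate files on disk. A hypothetical follow-up check, not part of the logged test and assuming minikube names the kubeconfig cluster after the profile:
  # Print the first bytes of the embedded CA data; an empty result would suggest
  # the kubeconfig still points at certificate files instead of embedding them.
  kubectl config view --raw -o jsonpath='{.clusters[?(@.name=="embed-certs-339897")].cluster.certificate-authority-data}' | head -c 40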

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-612312 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-612312 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m11.237613455s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.24s)
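Note: this profile starts the API server on port 8444 via --apiserver-port=8444 rather than the default 8443. A hypothetical way to confirm the port after such a start (not something the logged test does):
  # The API server URL recorded in kubeconfig for this context should end in :8444.
  kubectl --context default-k8s-diff-port-612312 config view --minify -o jsonpath='{.clusters[0].cluster.server}'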

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-037711 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a0f181b6-ca4e-4915-a665-d703ce9a2a2f] Pending
helpers_test.go:344: "busybox" [a0f181b6-ca4e-4915-a665-d703ce9a2a2f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a0f181b6-ca4e-4915-a665-d703ce9a2a2f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004789166s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-037711 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.28s)
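Note: DeployApp creates a busybox pod from the repo's testdata and then reads its open-file limit with ulimit -n. A manual equivalent is sketched below; the manifest contents and the expected limit value are not shown in this report.
  kubectl --context no-preload-037711 create -f testdata/busybox.yaml
  kubectl --context no-preload-037711 wait --for=condition=Ready pod busybox --timeout=8m
  kubectl --context no-preload-037711 exec busybox -- /bin/sh -c "ulimit -n"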

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-037711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-037711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.018047969s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-037711 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)
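Note: the --images/--registries flags above point the metrics-server addon at a placeholder registry (fake.domain) instead of the real one. A hypothetical check that the override landed on the Deployment, assuming a single container; the rendered image string itself is not shown in this log:
  kubectl --context no-preload-037711 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'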

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-339897 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [93ed91b2-cadf-461a-a6f9-57509443ac39] Pending
helpers_test.go:344: "busybox" [93ed91b2-cadf-461a-a6f9-57509443ac39] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [93ed91b2-cadf-461a-a6f9-57509443ac39] Running
E0920 18:57:29.487228  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004922031s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-339897 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.64s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-339897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-339897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.005133227s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-339897 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-612312 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [07eaf378-3cf0-4ff2-9742-d7fa0a2ef5df] Pending
helpers_test.go:344: "busybox" [07eaf378-3cf0-4ff2-9742-d7fa0a2ef5df] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [07eaf378-3cf0-4ff2-9742-d7fa0a2ef5df] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004431108s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-612312 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-612312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-612312 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (684.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-037711 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 18:59:48.376778  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:48.383182  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:48.394558  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:48.416017  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:48.457492  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:48.539057  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:48.700664  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:49.022422  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:49.664302  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:50.945645  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-037711 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (11m24.071788185s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-037711 -n no-preload-037711
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (684.33s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (594.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-339897 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 19:00:08.872030  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-339897 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m54.510786875s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-339897 -n embed-certs-339897
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (594.78s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (561.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-612312 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 19:00:24.215761  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:29.353850  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:34.457793  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:37.554761  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/calico-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:47.958172  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/auto-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:50.144992  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:50.151533  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:50.163016  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:50.184617  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:50.226171  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:50.307760  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:50.469439  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:50.791292  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:51.355673  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:51.362086  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:51.373533  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:51.395020  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:51.433535  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:51.436938  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:51.518445  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:51.680024  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:52.001822  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:52.644050  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:52.715564  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:53.926168  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:54.939310  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/enable-default-cni-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:55.277087  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:56.487834  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:00.398517  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:01.610008  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:10.315753  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/custom-flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:10.640755  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/flannel-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:11.851903  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/bridge-793540/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:17.825854  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/kindnet-793540/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-612312 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m21.249459205s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-612312 -n default-k8s-diff-port-612312
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (561.54s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (4.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-425599 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-425599 --alsologtostderr -v=3: (4.558200237s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.56s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-425599 -n old-k8s-version-425599
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-425599 -n old-k8s-version-425599: exit status 7 (64.419973ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-425599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
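Note: exit status 7 from "minikube status" corresponds to the Stopped host state shown in the stdout block above; the test treats it as acceptable ("may be ok") and then enables the dashboard addon while the cluster is down. Sketch of the same flow by hand:
  minikube status --format='{{.Host}}' -p old-k8s-version-425599 || echo "status exit=$?"   # 7 while the profile is stopped
  minikube addons enable dashboard -p old-k8s-version-425599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4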

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (46.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-398410 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 19:25:32.563808  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/functional-024386/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-398410 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (46.363523797s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-398410 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-398410 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.058317256s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.59s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-398410 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-398410 --alsologtostderr -v=3: (10.590563786s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.59s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-398410 -n newest-cni-398410
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-398410 -n newest-cni-398410: exit status 7 (79.056229ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-398410 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (39.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-398410 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-398410 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (39.047986278s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-398410 -n newest-cni-398410
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (39.39s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-398410 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
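Note: VerifyKubernetesImages lists the images present in the profile as JSON and flags anything that is not a stock minikube/Kubernetes image (here the kindnet CNI image). A hypothetical jq filter over the same output; the repoTags field name is an assumption, since the JSON schema is not shown in this report:
  # List all image tags reported by the profile (field name assumed).
  minikube -p newest-cni-398410 image list --format=json | jq -r '.[].repoTags[]?'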

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-398410 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-398410 -n newest-cni-398410
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-398410 -n newest-cni-398410: exit status 2 (253.759275ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-398410 -n newest-cni-398410
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-398410 -n newest-cni-398410: exit status 2 (247.804565ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-398410 --alsologtostderr -v=1
E0920 19:27:02.629876  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:27:02.636414  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:27:02.647868  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:27:02.669393  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:27:02.710878  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:27:02.792861  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:27:02.954937  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-398410 -n newest-cni-398410
E0920 19:27:03.276972  244849 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-237658/.minikube/profiles/no-preload-037711/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-398410 -n newest-cni-398410
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.41s)
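Note: the Pause test tolerates exit status 2 from "minikube status" because a paused profile reports APIServer=Paused and Kubelet=Stopped, as the stdout blocks above show; after unpause the status calls are expected to succeed again. Sketch of the round trip exercised above:
  minikube pause -p newest-cni-398410 --alsologtostderr -v=1
  minikube status --format='{{.APIServer}}' -p newest-cni-398410 -n newest-cni-398410 || true   # "Paused", exit 2 while paused
  minikube unpause -p newest-cni-398410 --alsologtostderr -v=1
  minikube status --format='{{.Kubelet}}' -p newest-cni-398410 -n newest-cni-398410             # expected to succeed after unpause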

                                                
                                    

Test skip (37/311)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
37 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
137 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
141 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
254 TestNetworkPlugins/group/kubenet 2.79
262 TestNetworkPlugins/group/cilium 4.01
278 TestStartStop/group/disable-driver-mounts 0.24
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:817: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-793540 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-793540

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-793540

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-793540

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-793540

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-793540

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-793540

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-793540

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-793540

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-793540

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-793540

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-793540

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-793540" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-793540" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-793540

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793540"

                                                
                                                
----------------------- debugLogs end: kubenet-793540 [took: 2.653999091s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-793540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-793540
--- SKIP: TestNetworkPlugins/group/kubenet (2.79s)
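(Note, not part of the test output: every debugLogs probe above failed with "context was not found" or "Profile ... not found" simply because the kubenet-793540 profile was never started before the skip. A minimal sketch of how the in-cluster DNS checks could be reproduced by hand, assuming the profile is started first; the pod name "dns-debug" is illustrative only:

    minikube start -p kubenet-793540
    kubectl --context kubenet-793540 run --rm dns-debug --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- sh -c "nslookup kubernetes.default"

The same pattern applies to the cilium-793540 debugLogs block below.)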

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it interferes with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-793540 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-793540

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-793540

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-793540

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-793540

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-793540

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-793540

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-793540

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-793540

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-793540

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-793540

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-793540

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-793540" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-793540

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-793540

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-793540

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-793540

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-793540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-793540" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-793540

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-793540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793540"

                                                
                                                
----------------------- debugLogs end: cilium-793540 [took: 3.841195555s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-793540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-793540
--- SKIP: TestNetworkPlugins/group/cilium (4.01s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-896665" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-896665
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)
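(Note, not part of the test output: the skip in start_stop_delete_test.go:103 is a driver check. A minimal sketch of the driver selection this group expects, with the profile name taken from the log above:

    minikube start -p disable-driver-mounts-896665 --driver=virtualbox

This KVM-based job does not use the virtualbox driver, so the group is skipped and the profile is cleaned up.)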

                                                
                                    